Test Report: Docker_macOS 15909

e9ad3cc97e70b666b291650f029d994a5d385064:2023-02-23:28036

Failed tests (72/253)

Order  Failed test  Duration (s)
147 TestIngressAddonLegacy/StartLegacyK8sCluster 261.86
149 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 99.52
150 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 113.37
151 TestIngressAddonLegacy/serial/ValidateIngressAddons 0.45
200 TestMultiNode/serial/DeployApp2Nodes 9.12
201 TestMultiNode/serial/PingHostFrom2Pods 4.66
221 TestRunningBinaryUpgrade 82.72
223 TestKubernetesUpgrade 55.12
224 TestMissingContainerUpgrade 201.94
239 TestStoppedBinaryUpgrade/Upgrade 1021.96
241 TestPause/serial/Start 38.29
251 TestNoKubernetes/serial/StartWithK8s 36.9
252 TestNoKubernetes/serial/StartWithStopK8s 62.42
253 TestNoKubernetes/serial/Start 63.81
256 TestNoKubernetes/serial/Stop 13.85
257 TestNoKubernetes/serial/StartNoArgs 61.13
259 TestNetworkPlugins/group/auto/Start 38.95
260 TestNetworkPlugins/group/kindnet/Start 39.84
261 TestNetworkPlugins/group/calico/Start 38.1
262 TestNetworkPlugins/group/custom-flannel/Start 40.76
263 TestNetworkPlugins/group/false/Start 35.5
264 TestNetworkPlugins/group/enable-default-cni/Start 36.13
265 TestNetworkPlugins/group/flannel/Start 43.23
266 TestNetworkPlugins/group/bridge/Start 40.31
267 TestNetworkPlugins/group/kubenet/Start 40.49
269 TestStartStop/group/old-k8s-version/serial/FirstStart 38.63
270 TestStartStop/group/old-k8s-version/serial/DeployApp 0.35
271 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.42
272 TestStartStop/group/old-k8s-version/serial/Stop 14.84
273 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.53
274 TestStartStop/group/old-k8s-version/serial/SecondStart 61.6
275 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 0.16
276 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 0.19
277 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.35
278 TestStartStop/group/old-k8s-version/serial/Pause 0.51
280 TestStartStop/group/no-preload/serial/FirstStart 37.57
281 TestStartStop/group/no-preload/serial/DeployApp 0.35
282 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.42
283 TestStartStop/group/no-preload/serial/Stop 15.11
284 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.53
285 TestStartStop/group/no-preload/serial/SecondStart 64.44
286 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 0.16
287 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 0.2
288 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.35
289 TestStartStop/group/no-preload/serial/Pause 0.51
291 TestStartStop/group/embed-certs/serial/FirstStart 43.17
292 TestStoppedBinaryUpgrade/MinikubeLogs 0.38
294 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.22
295 TestStartStop/group/embed-certs/serial/DeployApp 0.35
296 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.42
297 TestStartStop/group/embed-certs/serial/Stop 18.98
298 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 0.37
299 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.59
300 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.59
301 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
302 TestStartStop/group/embed-certs/serial/SecondStart 58.2
303 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.53
304 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 59.44
305 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 0.16
306 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 0.2
307 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
308 TestStartStop/group/embed-certs/serial/Pause 0.51
310 TestStartStop/group/newest-cni/serial/FirstStart 41.56
311 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 0.16
312 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 0.19
313 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
314 TestStartStop/group/default-k8s-diff-port/serial/Pause 0.53
317 TestStartStop/group/newest-cni/serial/Stop 13.29
318 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.53
319 TestStartStop/group/newest-cni/serial/SecondStart 58.92
322 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
323 TestStartStop/group/newest-cni/serial/Pause 0.51
TestIngressAddonLegacy/StartLegacyK8sCluster (261.86s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E0223 12:43:53.665808    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:46:09.887354    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:46:37.574389    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:46:46.556713    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.562801    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.573843    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.596021    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.637779    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.717862    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:46.878141    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:47.199518    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:47.840276    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:49.121819    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:51.683583    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:46:56.804484    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:47:07.046983    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:47:27.529570    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m21.830816477s)

-- stdout --
	* [ingress-addon-legacy-611000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-611000 in cluster ingress-addon-legacy-611000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I0223 12:43:42.522214    5086 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:43:42.522368    5086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:43:42.522374    5086 out.go:309] Setting ErrFile to fd 2...
	I0223 12:43:42.522378    5086 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:43:42.522481    5086 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:43:42.523813    5086 out.go:303] Setting JSON to false
	I0223 12:43:42.542360    5086 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":797,"bootTime":1677184225,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:43:42.542449    5086 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:43:42.564178    5086 out.go:177] * [ingress-addon-legacy-611000] minikube v1.29.0 on Darwin 13.2
	I0223 12:43:42.606146    5086 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 12:43:42.606146    5086 notify.go:220] Checking for updates...
	I0223 12:43:42.628268    5086 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:43:42.650099    5086 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:43:42.671093    5086 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:43:42.692284    5086 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 12:43:42.714100    5086 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 12:43:42.735262    5086 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 12:43:42.795432    5086 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:43:42.795548    5086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:43:42.934839    5086 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:43:42.844343792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:43:42.956666    5086 out.go:177] * Using the docker driver based on user configuration
	I0223 12:43:42.978422    5086 start.go:296] selected driver: docker
	I0223 12:43:42.978449    5086 start.go:857] validating driver "docker" against <nil>
	I0223 12:43:42.978472    5086 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 12:43:42.982419    5086 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:43:43.123412    5086 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:51 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:43:43.032036508 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:43:43.123524    5086 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 12:43:43.123717    5086 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 12:43:43.145335    5086 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 12:43:43.166927    5086 cni.go:84] Creating CNI manager for ""
	I0223 12:43:43.166958    5086 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 12:43:43.166969    5086 start_flags.go:319] config:
	{Name:ingress-addon-legacy-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.lo
cal ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:43:43.188113    5086 out.go:177] * Starting control plane node ingress-addon-legacy-611000 in cluster ingress-addon-legacy-611000
	I0223 12:43:43.231210    5086 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:43:43.252978    5086 out.go:177] * Pulling base image ...
	I0223 12:43:43.295154    5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 12:43:43.295215    5086 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:43:43.352795    5086 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 12:43:43.352820    5086 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 12:43:43.399226    5086 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 12:43:43.399273    5086 cache.go:57] Caching tarball of preloaded images
	I0223 12:43:43.399611    5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 12:43:43.421259    5086 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0223 12:43:43.463042    5086 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:43:43.688982    5086 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I0223 12:43:54.465288    5086 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:43:54.465448    5086 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:43:55.087526    5086 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I0223 12:43:55.087757    5086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/config.json ...
	I0223 12:43:55.087785    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/config.json: {Name:mk1e549380ea62e21517a4018d2dfab72fa04b23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:43:55.088060    5086 cache.go:193] Successfully downloaded all kic artifacts
	I0223 12:43:55.088088    5086 start.go:364] acquiring machines lock for ingress-addon-legacy-611000: {Name:mk9aab0310f9468d3dad74767a4969e82ab28a47 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 12:43:55.088178    5086 start.go:368] acquired machines lock for "ingress-addon-legacy-611000" in 83.118µs
	I0223 12:43:55.088205    5086 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:defau
lt APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 12:43:55.088251    5086 start.go:125] createHost starting for "" (driver="docker")
	I0223 12:43:55.150667    5086 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0223 12:43:55.151015    5086 start.go:159] libmachine.API.Create for "ingress-addon-legacy-611000" (driver="docker")
	I0223 12:43:55.151059    5086 client.go:168] LocalClient.Create starting
	I0223 12:43:55.151258    5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 12:43:55.151340    5086 main.go:141] libmachine: Decoding PEM data...
	I0223 12:43:55.151373    5086 main.go:141] libmachine: Parsing certificate...
	I0223 12:43:55.151482    5086 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 12:43:55.151545    5086 main.go:141] libmachine: Decoding PEM data...
	I0223 12:43:55.151562    5086 main.go:141] libmachine: Parsing certificate...
	I0223 12:43:55.152415    5086 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-611000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 12:43:55.207714    5086 cli_runner.go:211] docker network inspect ingress-addon-legacy-611000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 12:43:55.207824    5086 network_create.go:281] running [docker network inspect ingress-addon-legacy-611000] to gather additional debugging logs...
	I0223 12:43:55.207841    5086 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-611000
	W0223 12:43:55.261808    5086 cli_runner.go:211] docker network inspect ingress-addon-legacy-611000 returned with exit code 1
	I0223 12:43:55.261834    5086 network_create.go:284] error running [docker network inspect ingress-addon-legacy-611000]: docker network inspect ingress-addon-legacy-611000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-611000
	I0223 12:43:55.261852    5086 network_create.go:286] output of [docker network inspect ingress-addon-legacy-611000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-611000
	
	** /stderr **
	I0223 12:43:55.261948    5086 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 12:43:55.317120    5086 network.go:209] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0005483b0}
	I0223 12:43:55.317153    5086 network_create.go:123] attempt to create docker network ingress-addon-legacy-611000 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0223 12:43:55.317218    5086 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 ingress-addon-legacy-611000
	I0223 12:43:55.402614    5086 network_create.go:107] docker network ingress-addon-legacy-611000 192.168.49.0/24 created
	I0223 12:43:55.402663    5086 kic.go:117] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-611000" container
	I0223 12:43:55.402793    5086 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 12:43:55.458104    5086 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-611000 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --label created_by.minikube.sigs.k8s.io=true
	I0223 12:43:55.511713    5086 oci.go:103] Successfully created a docker volume ingress-addon-legacy-611000
	I0223 12:43:55.511850    5086 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-611000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --entrypoint /usr/bin/test -v ingress-addon-legacy-611000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 12:43:55.954535    5086 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-611000
	I0223 12:43:55.954594    5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 12:43:55.954609    5086 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 12:43:55.954736    5086 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-611000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 12:44:02.145695    5086 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-611000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.190792613s)
	I0223 12:44:02.145729    5086 kic.go:199] duration metric: took 6.191039 seconds to extract preloaded images to volume
	I0223 12:44:02.145844    5086 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 12:44:02.287033    5086 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-611000 --name ingress-addon-legacy-611000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-611000 --network ingress-addon-legacy-611000 --ip 192.168.49.2 --volume ingress-addon-legacy-611000:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 12:44:02.631547    5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Running}}
	I0223 12:44:02.689138    5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
	I0223 12:44:02.748529    5086 cli_runner.go:164] Run: docker exec ingress-addon-legacy-611000 stat /var/lib/dpkg/alternatives/iptables
	I0223 12:44:02.867541    5086 oci.go:144] the created container "ingress-addon-legacy-611000" has a running status.
	I0223 12:44:02.867575    5086 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa...
	I0223 12:44:03.008251    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 12:44:03.008320    5086 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 12:44:03.108984    5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
	I0223 12:44:03.164534    5086 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 12:44:03.164554    5086 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-611000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 12:44:03.265498    5086 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
	I0223 12:44:03.320907    5086 machine.go:88] provisioning docker machine ...
	I0223 12:44:03.320948    5086 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-611000"
	I0223 12:44:03.321062    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:03.377375    5086 main.go:141] libmachine: Using SSH client type: native
	I0223 12:44:03.377761    5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50516 <nil> <nil>}
	I0223 12:44:03.377775    5086 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-611000 && echo "ingress-addon-legacy-611000" | sudo tee /etc/hostname
	I0223 12:44:03.519610    5086 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-611000
	
	I0223 12:44:03.519681    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:03.576972    5086 main.go:141] libmachine: Using SSH client type: native
	I0223 12:44:03.577316    5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50516 <nil> <nil>}
	I0223 12:44:03.577336    5086 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-611000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-611000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-611000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 12:44:03.711856    5086 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 12:44:03.711882    5086 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-825/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-825/.minikube}
	I0223 12:44:03.711905    5086 ubuntu.go:177] setting up certificates
	I0223 12:44:03.711912    5086 provision.go:83] configureAuth start
	I0223 12:44:03.711997    5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-611000
	I0223 12:44:03.767981    5086 provision.go:138] copyHostCerts
	I0223 12:44:03.768027    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:44:03.768088    5086 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem, removing ...
	I0223 12:44:03.768095    5086 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:44:03.768218    5086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem (1078 bytes)
	I0223 12:44:03.768382    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:44:03.768420    5086 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem, removing ...
	I0223 12:44:03.768425    5086 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:44:03.768499    5086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem (1123 bytes)
	I0223 12:44:03.768611    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:44:03.768650    5086 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem, removing ...
	I0223 12:44:03.768655    5086 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:44:03.768720    5086 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem (1675 bytes)
	I0223 12:44:03.768831    5086 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-611000 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-611000]
	I0223 12:44:03.859369    5086 provision.go:172] copyRemoteCerts
	I0223 12:44:03.859423    5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 12:44:03.859470    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:03.915266    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:44:04.009994    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 12:44:04.010087    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 12:44:04.026966    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 12:44:04.027059    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0223 12:44:04.043801    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 12:44:04.043879    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0223 12:44:04.060474    5086 provision.go:86] duration metric: configureAuth took 348.538018ms
	I0223 12:44:04.060493    5086 ubuntu.go:193] setting minikube options for container-runtime
	I0223 12:44:04.060655    5086 config.go:182] Loaded profile config "ingress-addon-legacy-611000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 12:44:04.060718    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:04.117648    5086 main.go:141] libmachine: Using SSH client type: native
	I0223 12:44:04.118007    5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50516 <nil> <nil>}
	I0223 12:44:04.118024    5086 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 12:44:04.251967    5086 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 12:44:04.251986    5086 ubuntu.go:71] root file system type: overlay
	I0223 12:44:04.252132    5086 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 12:44:04.252227    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:04.308898    5086 main.go:141] libmachine: Using SSH client type: native
	I0223 12:44:04.309276    5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50516 <nil> <nil>}
	I0223 12:44:04.309325    5086 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 12:44:04.451376    5086 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 12:44:04.451492    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:04.508200    5086 main.go:141] libmachine: Using SSH client type: native
	I0223 12:44:04.508568    5086 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 50516 <nil> <nil>}
	I0223 12:44:04.508581    5086 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 12:44:05.121534    5086 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 20:44:04.449655934 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 12:44:05.121560    5086 machine.go:91] provisioned docker machine in 1.800608719s
	I0223 12:44:05.121566    5086 client.go:171] LocalClient.Create took 9.97037326s
	I0223 12:44:05.121592    5086 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-611000" took 9.970451342s
	I0223 12:44:05.121602    5086 start.go:300] post-start starting for "ingress-addon-legacy-611000" (driver="docker")
	I0223 12:44:05.121607    5086 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 12:44:05.121689    5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 12:44:05.121742    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:05.182175    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:44:05.276293    5086 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 12:44:05.279915    5086 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 12:44:05.279934    5086 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 12:44:05.279946    5086 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 12:44:05.279951    5086 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 12:44:05.279962    5086 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/addons for local assets ...
	I0223 12:44:05.280063    5086 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/files for local assets ...
	I0223 12:44:05.280242    5086 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> 20572.pem in /etc/ssl/certs
	I0223 12:44:05.280249    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /etc/ssl/certs/20572.pem
	I0223 12:44:05.280458    5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 12:44:05.287454    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:44:05.304375    5086 start.go:303] post-start completed in 182.761912ms
	I0223 12:44:05.304884    5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-611000
	I0223 12:44:05.361924    5086 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/config.json ...
	I0223 12:44:05.362347    5086 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:44:05.362413    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:05.418158    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:44:05.508480    5086 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 12:44:05.513207    5086 start.go:128] duration metric: createHost completed in 10.424816003s
	I0223 12:44:05.513223    5086 start.go:83] releasing machines lock for "ingress-addon-legacy-611000", held for 10.424903748s
	I0223 12:44:05.513306    5086 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-611000
	I0223 12:44:05.569508    5086 ssh_runner.go:195] Run: cat /version.json
	I0223 12:44:05.569542    5086 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 12:44:05.569583    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:05.569610    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:05.628604    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:44:05.628722    5086 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:44:05.969791    5086 ssh_runner.go:195] Run: systemctl --version
	I0223 12:44:05.974264    5086 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 12:44:05.979084    5086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 12:44:05.999541    5086 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 12:44:05.999624    5086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0223 12:44:06.014204    5086 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0223 12:44:06.021974    5086 cni.go:307] configured [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
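The find/sed passes above only rewrite existing files under /etc/cni/net.d, and the patched files are not printed in the log; assuming the usual CNI config layout, the loopback config would end up roughly as:

  {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
  }

and any bridge config (100-crio-bridge.conf in this run) keeps its structure with its "subnet" entries rewritten to 10.244.0.0/16.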
	I0223 12:44:06.021989    5086 start.go:485] detecting cgroup driver to use...
	I0223 12:44:06.022001    5086 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:44:06.022082    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:44:06.035214    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.2"|' /etc/containerd/config.toml"
	I0223 12:44:06.043649    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 12:44:06.051881    5086 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 12:44:06.051938    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 12:44:06.060500    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:44:06.068758    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 12:44:06.077080    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:44:06.085403    5086 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 12:44:06.093047    5086 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 12:44:06.101304    5086 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 12:44:06.108302    5086 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 12:44:06.115251    5086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:44:06.180122    5086 ssh_runner.go:195] Run: sudo systemctl restart containerd
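The log does not dump /etc/containerd/config.toml, but the sed commands above imply these values in the config that containerd restarts with (a sketch of the touched keys only; their exact location depends on the config layout shipped in the image):

  sandbox_image = "k8s.gcr.io/pause:3.2"
  restrict_oom_score_adj = false
  SystemdCgroup = false           # cgroupfs driver, matching the detected host driver
  conf_dir = "/etc/cni/net.d"
  # runtime handlers rewritten to "io.containerd.runc.v2"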
	I0223 12:44:06.252107    5086 start.go:485] detecting cgroup driver to use...
	I0223 12:44:06.252127    5086 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:44:06.252200    5086 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 12:44:06.262397    5086 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 12:44:06.262477    5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 12:44:06.272560    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:44:06.286198    5086 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 12:44:06.399038    5086 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
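With /etc/crictl.yaml now pointing at the dockershim socket, the runtime can also be queried directly from the node, assuming crictl is available in the kicbase image, for example:

  sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock ps -a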
	I0223 12:44:06.479151    5086 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 12:44:06.479169    5086 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
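The 144-byte /etc/docker/daemon.json is written from memory and its contents are not shown in the log; for the cgroupfs driver chosen here it would contain at least something like the following (illustrative, the remaining bytes cannot be reconstructed from this output):

  {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
  }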
	I0223 12:44:06.492009    5086 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:44:06.587108    5086 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 12:44:06.795296    5086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:44:06.819580    5086 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:44:06.886115    5086 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 23.0.1 ...
	I0223 12:44:06.886342    5086 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-611000 dig +short host.docker.internal
	I0223 12:44:07.017178    5086 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 12:44:07.017290    5086 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 12:44:07.021690    5086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:44:07.031420    5086 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:44:07.087420    5086 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I0223 12:44:07.087511    5086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 12:44:07.106870    5086 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 12:44:07.106888    5086 docker.go:560] Images already preloaded, skipping extraction
	I0223 12:44:07.106982    5086 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 12:44:07.126951    5086 docker.go:630] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I0223 12:44:07.126967    5086 cache_images.go:84] Images are preloaded, skipping loading
	I0223 12:44:07.127051    5086 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 12:44:07.152785    5086 cni.go:84] Creating CNI manager for ""
	I0223 12:44:07.152804    5086 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 12:44:07.152817    5086 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 12:44:07.152833    5086 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-611000 NodeName:ingress-addon-legacy-611000 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 12:44:07.152946    5086 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-611000"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 12:44:07.153032    5086 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-611000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
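The KubeletConfiguration above pins cgroupDriver: cgroupfs and the kubelet unit runs with --container-runtime=docker, so both sides are expected to agree on the driver; a quick consistency check on the node, using only paths and commands already present in this run:

  docker info --format '{{.CgroupDriver}}'              # should print cgroupfs
  sudo grep cgroupDriver /var/lib/kubelet/config.yaml   # written by kubeadm from the config above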
	I0223 12:44:07.153102    5086 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0223 12:44:07.160746    5086 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 12:44:07.160814    5086 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 12:44:07.168107    5086 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I0223 12:44:07.180505    5086 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0223 12:44:07.193037    5086 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2124 bytes)
	I0223 12:44:07.205539    5086 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0223 12:44:07.209541    5086 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:44:07.219043    5086 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000 for IP: 192.168.49.2
	I0223 12:44:07.219061    5086 certs.go:186] acquiring lock for shared ca certs: {Name:mk9b7a98958f4333f06cfa6d87963d4d7f2b94cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:44:07.219243    5086 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key
	I0223 12:44:07.219306    5086 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key
	I0223 12:44:07.219356    5086 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.key
	I0223 12:44:07.219369    5086 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.crt with IP's: []
	I0223 12:44:07.337418    5086 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.crt ...
	I0223 12:44:07.337431    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.crt: {Name:mk129ec7f5a94c39da390d3fda302771208386c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:44:07.337759    5086 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.key ...
	I0223 12:44:07.337775    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/client.key: {Name:mk57809e5122d9b38d6f444bd5a8f30310a55151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:44:07.338002    5086 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2
	I0223 12:44:07.338018    5086 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 12:44:07.440031    5086 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2 ...
	I0223 12:44:07.440040    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2: {Name:mkd9472f270de56d53ed5155231c608aab76cb5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:44:07.440273    5086 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2 ...
	I0223 12:44:07.440281    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2: {Name:mk633821d5c92ba97ceefebba282c13eb5e823a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:44:07.440476    5086 certs.go:333] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt
	I0223 12:44:07.440646    5086 certs.go:337] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key
	I0223 12:44:07.440815    5086 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key
	I0223 12:44:07.440833    5086 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt with IP's: []
	I0223 12:44:07.738249    5086 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt ...
	I0223 12:44:07.738263    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt: {Name:mk03bb3785ff7aa6ffb4e0b3c55bf5bd5a5b9025 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:44:07.738575    5086 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key ...
	I0223 12:44:07.738583    5086 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key: {Name:mkfed3d8981514ca42c94d5ecf0c8cbc980b582b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
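The apiserver certificate generated above is signed for the IPs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]; if the SANs ever need to be confirmed, openssl can print them from the file path used in this run (illustrative command):

  openssl x509 -noout -text \
    -in /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt \
    | grep -A1 'Subject Alternative Name'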
	I0223 12:44:07.738799    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 12:44:07.738835    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 12:44:07.738858    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 12:44:07.738880    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 12:44:07.738900    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 12:44:07.738922    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 12:44:07.738942    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 12:44:07.738965    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 12:44:07.739062    5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem (1338 bytes)
	W0223 12:44:07.739116    5086 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057_empty.pem, impossibly tiny 0 bytes
	I0223 12:44:07.739129    5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 12:44:07.739162    5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem (1078 bytes)
	I0223 12:44:07.739193    5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem (1123 bytes)
	I0223 12:44:07.739226    5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem (1675 bytes)
	I0223 12:44:07.739302    5086 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:44:07.739336    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /usr/share/ca-certificates/20572.pem
	I0223 12:44:07.739365    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:44:07.739385    5086 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem -> /usr/share/ca-certificates/2057.pem
	I0223 12:44:07.739917    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 12:44:07.757904    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0223 12:44:07.774576    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 12:44:07.791516    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/ingress-addon-legacy-611000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0223 12:44:07.808806    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 12:44:07.825544    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 12:44:07.842556    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 12:44:07.859490    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 12:44:07.876434    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /usr/share/ca-certificates/20572.pem (1708 bytes)
	I0223 12:44:07.893259    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 12:44:07.910033    5086 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem --> /usr/share/ca-certificates/2057.pem (1338 bytes)
	I0223 12:44:07.926923    5086 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 12:44:07.939439    5086 ssh_runner.go:195] Run: openssl version
	I0223 12:44:07.944783    5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 12:44:07.952788    5086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:44:07.956608    5086 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:44:07.956656    5086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:44:07.962064    5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 12:44:07.970142    5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2057.pem && ln -fs /usr/share/ca-certificates/2057.pem /etc/ssl/certs/2057.pem"
	I0223 12:44:07.978267    5086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2057.pem
	I0223 12:44:07.982421    5086 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:44:07.982465    5086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2057.pem
	I0223 12:44:07.987941    5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2057.pem /etc/ssl/certs/51391683.0"
	I0223 12:44:07.995678    5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20572.pem && ln -fs /usr/share/ca-certificates/20572.pem /etc/ssl/certs/20572.pem"
	I0223 12:44:08.003495    5086 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20572.pem
	I0223 12:44:08.007336    5086 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:44:08.007382    5086 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20572.pem
	I0223 12:44:08.012766    5086 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20572.pem /etc/ssl/certs/3ec20f2e.0"
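Each symlink created above is named after the subject hash that the preceding openssl x509 -hash -noout call prints (b5213941 for minikubeCA.pem in this run), so the links can be reproduced by hand if needed:

  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"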
	I0223 12:44:08.020626    5086 kubeadm.go:401] StartCluster: {Name:ingress-addon-legacy-611000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-611000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:44:08.020742    5086 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 12:44:08.040224    5086 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 12:44:08.047941    5086 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 12:44:08.055233    5086 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 12:44:08.055288    5086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 12:44:08.062521    5086 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 12:44:08.062546    5086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 12:44:08.109644    5086 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 12:44:08.109695    5086 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 12:44:08.271425    5086 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 12:44:08.271524    5086 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 12:44:08.271606    5086 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 12:44:08.418756    5086 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:44:08.419275    5086 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:44:08.419324    5086 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 12:44:08.490033    5086 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 12:44:08.511717    5086 out.go:204]   - Generating certificates and keys ...
	I0223 12:44:08.511843    5086 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 12:44:08.511918    5086 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 12:44:08.630914    5086 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 12:44:08.843859    5086 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 12:44:09.045387    5086 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 12:44:09.160572    5086 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 12:44:09.271313    5086 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 12:44:09.271438    5086 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 12:44:09.473943    5086 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 12:44:09.474253    5086 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0223 12:44:09.630096    5086 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 12:44:09.702706    5086 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 12:44:09.764808    5086 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 12:44:09.764871    5086 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 12:44:09.920126    5086 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 12:44:10.066583    5086 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 12:44:10.161021    5086 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 12:44:10.359555    5086 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 12:44:10.360038    5086 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 12:44:10.381626    5086 out.go:204]   - Booting up control plane ...
	I0223 12:44:10.381846    5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 12:44:10.382033    5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 12:44:10.382152    5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 12:44:10.382276    5086 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 12:44:10.382568    5086 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 12:44:50.368675    5086 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 12:44:50.369153    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:44:50.369317    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:44:55.371225    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:44:55.371564    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:45:05.372086    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:45:05.372257    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:45:25.374406    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:45:25.374659    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:46:05.439875    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:46:05.440060    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:46:05.440076    5086 kubeadm.go:322] 
	I0223 12:46:05.440112    5086 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 12:46:05.440152    5086 kubeadm.go:322] 		timed out waiting for the condition
	I0223 12:46:05.440162    5086 kubeadm.go:322] 
	I0223 12:46:05.440190    5086 kubeadm.go:322] 	This error is likely caused by:
	I0223 12:46:05.440231    5086 kubeadm.go:322] 		- The kubelet is not running
	I0223 12:46:05.440311    5086 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 12:46:05.440317    5086 kubeadm.go:322] 
	I0223 12:46:05.440403    5086 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 12:46:05.440444    5086 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 12:46:05.440468    5086 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 12:46:05.440475    5086 kubeadm.go:322] 
	I0223 12:46:05.440556    5086 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 12:46:05.440616    5086 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 12:46:05.440622    5086 kubeadm.go:322] 
	I0223 12:46:05.440696    5086 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 12:46:05.440760    5086 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 12:46:05.440836    5086 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 12:46:05.440863    5086 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 12:46:05.440873    5086 kubeadm.go:322] 
	I0223 12:46:05.443289    5086 kubeadm.go:322] W0223 20:44:08.108759    1158 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 12:46:05.443443    5086 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 12:46:05.443512    5086 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 12:46:05.443627    5086 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0223 12:46:05.443725    5086 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:46:05.443828    5086 kubeadm.go:322] W0223 20:44:10.363292    1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 12:46:05.443929    5086 kubeadm.go:322] W0223 20:44:10.363983    1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 12:46:05.443990    5086 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 12:46:05.444054    5086 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	W0223 12:46:05.444253    5086 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-611000 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 20:44:08.108759    1158 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 20:44:10.363292    1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 20:44:10.363983    1158 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
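Before the retry below, the checks that kubeadm itself suggests (plus the health endpoint it was polling) are the quickest way to see why the kubelet never answered on port 10248; all of these commands come from the output above and run on the node:

  systemctl status kubelet
  journalctl -xeu kubelet
  docker ps -a | grep kube | grep -v pause     # then: docker logs CONTAINERID
  curl -sSL http://localhost:10248/healthz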
	
	I0223 12:46:05.444289    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I0223 12:46:05.854826    5086 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:46:05.866325    5086 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 12:46:05.866385    5086 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 12:46:05.873829    5086 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 12:46:05.873852    5086 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 12:46:05.920870    5086 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0223 12:46:05.920917    5086 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 12:46:06.082677    5086 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 12:46:06.082777    5086 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 12:46:06.082859    5086 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 12:46:06.235168    5086 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:46:06.235638    5086 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:46:06.235672    5086 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 12:46:06.312527    5086 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 12:46:06.333704    5086 out.go:204]   - Generating certificates and keys ...
	I0223 12:46:06.333803    5086 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 12:46:06.333874    5086 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 12:46:06.333949    5086 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0223 12:46:06.333997    5086 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0223 12:46:06.334048    5086 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0223 12:46:06.334131    5086 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0223 12:46:06.334202    5086 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0223 12:46:06.334257    5086 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0223 12:46:06.334311    5086 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0223 12:46:06.334408    5086 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0223 12:46:06.334444    5086 kubeadm.go:322] [certs] Using the existing "sa" key
	I0223 12:46:06.334496    5086 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 12:46:06.398000    5086 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 12:46:06.496647    5086 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 12:46:06.676754    5086 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 12:46:06.831371    5086 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 12:46:06.831832    5086 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 12:46:06.853199    5086 out.go:204]   - Booting up control plane ...
	I0223 12:46:06.853333    5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 12:46:06.853488    5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 12:46:06.853599    5086 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 12:46:06.853701    5086 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 12:46:06.853966    5086 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 12:46:46.842224    5086 kubeadm.go:322] [kubelet-check] Initial timeout of 40s passed.
	I0223 12:46:46.842905    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:46:46.843206    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:46:51.844556    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:46:51.844817    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:47:01.846538    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:47:01.846770    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:47:21.848239    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:47:21.848446    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:48:01.850694    5086 kubeadm.go:322] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0223 12:48:01.850926    5086 kubeadm.go:322] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I0223 12:48:01.850937    5086 kubeadm.go:322] 
	I0223 12:48:01.851008    5086 kubeadm.go:322] 	Unfortunately, an error has occurred:
	I0223 12:48:01.851061    5086 kubeadm.go:322] 		timed out waiting for the condition
	I0223 12:48:01.851070    5086 kubeadm.go:322] 
	I0223 12:48:01.851115    5086 kubeadm.go:322] 	This error is likely caused by:
	I0223 12:48:01.851202    5086 kubeadm.go:322] 		- The kubelet is not running
	I0223 12:48:01.851415    5086 kubeadm.go:322] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0223 12:48:01.851426    5086 kubeadm.go:322] 
	I0223 12:48:01.851522    5086 kubeadm.go:322] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0223 12:48:01.851565    5086 kubeadm.go:322] 		- 'systemctl status kubelet'
	I0223 12:48:01.851602    5086 kubeadm.go:322] 		- 'journalctl -xeu kubelet'
	I0223 12:48:01.851608    5086 kubeadm.go:322] 
	I0223 12:48:01.851690    5086 kubeadm.go:322] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0223 12:48:01.851762    5086 kubeadm.go:322] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0223 12:48:01.851768    5086 kubeadm.go:322] 
	I0223 12:48:01.851847    5086 kubeadm.go:322] 	Here is one example how you may list all Kubernetes containers running in docker:
	I0223 12:48:01.851891    5086 kubeadm.go:322] 		- 'docker ps -a | grep kube | grep -v pause'
	I0223 12:48:01.851957    5086 kubeadm.go:322] 		Once you have found the failing container, you can inspect its logs with:
	I0223 12:48:01.851989    5086 kubeadm.go:322] 		- 'docker logs CONTAINERID'
	I0223 12:48:01.852001    5086 kubeadm.go:322] 
	I0223 12:48:01.854422    5086 kubeadm.go:322] W0223 20:46:05.920044    3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0223 12:48:01.854589    5086 kubeadm.go:322] 	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	I0223 12:48:01.854656    5086 kubeadm.go:322] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I0223 12:48:01.854761    5086 kubeadm.go:322] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
	I0223 12:48:01.854854    5086 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:48:01.854945    5086 kubeadm.go:322] W0223 20:46:06.836308    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 12:48:01.855043    5086 kubeadm.go:322] W0223 20:46:06.837138    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0223 12:48:01.855120    5086 kubeadm.go:322] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0223 12:48:01.855189    5086 kubeadm.go:322] To see the stack trace of this error execute with --v=5 or higher
	I0223 12:48:01.855216    5086 kubeadm.go:403] StartCluster complete in 3m53.767030501s
	I0223 12:48:01.855307    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0223 12:48:01.875352    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.875367    5086 logs.go:279] No container was found matching "kube-apiserver"
	I0223 12:48:01.875437    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0223 12:48:01.895493    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.895506    5086 logs.go:279] No container was found matching "etcd"
	I0223 12:48:01.895579    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0223 12:48:01.913839    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.913852    5086 logs.go:279] No container was found matching "coredns"
	I0223 12:48:01.913925    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0223 12:48:01.932635    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.932649    5086 logs.go:279] No container was found matching "kube-scheduler"
	I0223 12:48:01.932717    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0223 12:48:01.951841    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.951855    5086 logs.go:279] No container was found matching "kube-proxy"
	I0223 12:48:01.951929    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0223 12:48:01.970647    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.970667    5086 logs.go:279] No container was found matching "kube-controller-manager"
	I0223 12:48:01.970734    5086 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0223 12:48:01.989439    5086 logs.go:277] 0 containers: []
	W0223 12:48:01.989452    5086 logs.go:279] No container was found matching "kindnet"
	I0223 12:48:01.989459    5086 logs.go:123] Gathering logs for kubelet ...
	I0223 12:48:01.989467    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0223 12:48:02.028851    5086 logs.go:123] Gathering logs for dmesg ...
	I0223 12:48:02.028866    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0223 12:48:02.041107    5086 logs.go:123] Gathering logs for describe nodes ...
	I0223 12:48:02.041119    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0223 12:48:02.094178    5086 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0223 12:48:02.094189    5086 logs.go:123] Gathering logs for Docker ...
	I0223 12:48:02.094196    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I0223 12:48:02.118674    5086 logs.go:123] Gathering logs for container status ...
	I0223 12:48:02.118687    5086 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0223 12:48:04.165651    5086 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046914807s)
	W0223 12:48:04.165774    5086 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 20:46:05.920044    3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 20:46:06.836308    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 20:46:06.837138    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0223 12:48:04.165789    5086 out.go:239] * 
	* 
	W0223 12:48:04.165921    5086 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 20:46:05.920044    3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 20:46:06.836308    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 20:46:06.837138    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 20:46:05.920044    3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 20:46:06.836308    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 20:46:06.837138    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 12:48:04.165934    5086 out.go:239] * 
	* 
	W0223 12:48:04.166556    5086 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 12:48:04.229090    5086 out.go:177] 
	W0223 12:48:04.292328    5086 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 20:46:05.920044    3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 20:46:06.836308    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 20:46:06.837138    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W0223 20:46:05.920044    3552 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W0223 20:46:06.836308    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W0223 20:46:06.837138    3552 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0223 12:48:04.292460    5086 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0223 12:48:04.292543    5086 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0223 12:48:04.314109    5086 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (261.86s)
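A minimal local-reproduction sketch for the failure above, assuming the kubelet/cgroup-driver mismatch flagged in the preflight warnings is the cause. The first command mirrors the test invocation from ingress_addon_legacy_test.go:41 plus the --extra-config flag that the minikube output itself suggests; the binary path and profile name are the ones used by this run, and none of this is part of the recorded test output:

	# retry the same start with the kubelet cgroup driver forced to systemd,
	# per the "Suggestion" line printed in the log above (sketch only)
	out/minikube-darwin-amd64 start -p ingress-addon-legacy-611000 \
	  --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd

	# if the kubelet still does not come up, pull its recent logs from the node
	out/minikube-darwin-amd64 -p ingress-addon-legacy-611000 ssh "sudo journalctl -xeu kubelet -n 200"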

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (99.52s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-611000 addons enable ingress --alsologtostderr -v=5
E0223 12:48:08.490687    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:49:30.412344    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-611000 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m39.072858453s)

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 12:48:04.455007    5439 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:48:04.455285    5439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:48:04.455291    5439 out.go:309] Setting ErrFile to fd 2...
	I0223 12:48:04.455295    5439 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:48:04.455409    5439 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:48:04.477265    5439 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 12:48:04.498632    5439 config.go:182] Loaded profile config "ingress-addon-legacy-611000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 12:48:04.498658    5439 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-611000"
	I0223 12:48:04.498670    5439 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-611000"
	I0223 12:48:04.499270    5439 host.go:66] Checking if "ingress-addon-legacy-611000" exists ...
	I0223 12:48:04.500247    5439 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
	I0223 12:48:04.579782    5439 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0223 12:48:04.600660    5439 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I0223 12:48:04.621451    5439 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 12:48:04.642500    5439 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I0223 12:48:04.663704    5439 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0223 12:48:04.663730    5439 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15613 bytes)
	I0223 12:48:04.663840    5439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:48:04.722641    5439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:48:04.823320    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:04.873911    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:04.873949    5439 retry.go:31] will retry after 361.02993ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:05.237232    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:05.291189    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:05.291210    5439 retry.go:31] will retry after 288.026997ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:05.579722    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:05.634275    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:05.634296    5439 retry.go:31] will retry after 385.98234ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:06.022625    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:06.076689    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:06.076705    5439 retry.go:31] will retry after 1.233077874s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:07.311543    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:07.364183    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:07.364206    5439 retry.go:31] will retry after 1.263141199s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:08.628143    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:08.681572    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:08.681588    5439 retry.go:31] will retry after 1.533481816s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:10.217358    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:10.270549    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:10.270566    5439 retry.go:31] will retry after 1.506407556s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:11.779338    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:11.835485    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:11.835505    5439 retry.go:31] will retry after 5.680137614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:17.516119    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:17.569704    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:17.569720    5439 retry.go:31] will retry after 4.860558893s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:22.432637    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:22.485060    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:22.485076    5439 retry.go:31] will retry after 5.2730191s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:27.760550    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:27.815743    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:27.815758    5439 retry.go:31] will retry after 9.525833449s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:37.343590    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:37.398218    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:37.398232    5439 retry.go:31] will retry after 14.246054133s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:51.645880    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:48:51.699480    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:48:51.699493    5439 retry.go:31] will retry after 21.741472973s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:13.442787    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:49:13.496336    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:13.496357    5439 retry.go:31] will retry after 29.818687277s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:43.317929    5439 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W0223 12:49:43.372501    5439 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:43.372530    5439 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-611000"
	I0223 12:49:43.394126    5439 out.go:177] * Verifying ingress addon...
	I0223 12:49:43.416339    5439 out.go:177] 
	W0223 12:49:43.438124    5439 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-611000" does not exist: client config: context "ingress-addon-legacy-611000" does not exist]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-611000" does not exist: client config: context "ingress-addon-legacy-611000" does not exist]
	W0223 12:49:43.438153    5439 out.go:239] * 
	* 
	W0223 12:49:43.441779    5439 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 12:49:43.463059    5439 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
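The stderr block above shows minikube's retry helper re-running the failed `kubectl apply` with jittered delays that grow roughly exponentially (1.2s, 1.5s, 5.7s, ... 29.8s) until the addon enable gives up. Below is a minimal, self-contained sketch of that backoff pattern; it is not minikube's actual pkg/util/retry code, and the function name and durations are illustrative only.

// Sketch of an exponential-backoff retry loop with jitter (illustrative,
// not minikube's real retry implementation).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo keeps calling fn until it succeeds or maxTime has elapsed,
// sleeping an exponentially growing, jittered delay between attempts.
func retryExpo(fn func() error, initial, maxTime time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxTime {
			return fmt.Errorf("timed out after %s: %w", maxTime, err)
		}
		// Add up to +/-25% jitter so parallel callers do not retry in lockstep,
		// then double the base delay for the next round.
		jitter := time.Duration(rand.Int63n(int64(delay/2))) - delay/4
		fmt.Printf("will retry after %s\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 200*time.Millisecond, 10*time.Second)
	fmt.Println("attempts:", attempts, "err:", err)
}

Because every attempt in the log fails with the same connection-refused error, the loop only ends when the overall deadline expires, which is why most of the addon-enable time is spent sleeping between attempts.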
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-611000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-611000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb",
	        "Created": "2023-02-23T20:44:02.339675473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 48543,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T20:44:02.624221722Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/hosts",
	        "LogPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb-json.log",
	        "Name": "/ingress-addon-legacy-611000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-611000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-611000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b-init/diff:/var/lib/docker/overlay2/8ec2612a0ddcb8334b31fa2e2bc600c6d5b9a8c44165b2b56481359e67f82632/diff:/var/lib/docker/overlay2/5a4fcd864af35524d91e9f03f7a3ee889f13eb86bb854aeb6e62c3838280d5fc/diff:/var/lib/docker/overlay2/ca9e0d5e9bddb9a2d473c37bab2ac5f9f184126f5fb6e4c745f3be8914c03532/diff:/var/lib/docker/overlay2/619c31ca980751eda08bd35f1a83d95b3063245da47b494f158d072021494f4c/diff:/var/lib/docker/overlay2/7d620f2b5b85f7324d49fb2708fb7d4f1db9ff6b108d4ca3c6e3f6e8898b3ccc/diff:/var/lib/docker/overlay2/4ddfbadfca4c3e934e23063eb72f0a8b496f080e58fde7b65d0d73fac442087a/diff:/var/lib/docker/overlay2/27b7006de0c1a19fcc1c6121cd2f4e901780b83b732ce0880bc790e4d703cca6/diff:/var/lib/docker/overlay2/db9789081d8550dc6534127eb8db4d8c036eb99ed233cd3b179dcdd2148a8383/diff:/var/lib/docker/overlay2/78c4cb6843b7d55ed4487f84ff898a18bd4cf5b3ed008c952adc374157e890e2/diff:/var/lib/docker/overlay2/03a217
ffcc58371b47ca0920df99dd665be045c23519c8cf9abab2bdab1c5054/diff:/var/lib/docker/overlay2/011d725b17aadc4eb439b621974c407496cba93a833556a743d66552c707c1dc/diff:/var/lib/docker/overlay2/0b008f9fc314f9c01e518f7460862c8547f3d93385956a53f28f98fcd75dadd6/diff:/var/lib/docker/overlay2/356adf5e7cf2a827d25ddea32416e1a9e7d00b4b0adba15e70b4851516eaf000/diff:/var/lib/docker/overlay2/c9670a6f6981744d99152f0dbb1d59bf038363e715ac12f11e6ac3afec9650e4/diff:/var/lib/docker/overlay2/ab49bf4c3150a4da37f8525728f9da7e0aaded3fe8a24f903933eacd72f241da/diff:/var/lib/docker/overlay2/384753914be6edc5df597f20420a7b590d74a58e09b4f7eea9d19f5ccd3a971d/diff:/var/lib/docker/overlay2/a055650e8b909c9a2df13d514e5fcc459a3456dbcc9bc4597740578105e5f705/diff:/var/lib/docker/overlay2/985a888024d5ed2ee945bf037da4836977930ed967631a6e18255471a7b729c4/diff:/var/lib/docker/overlay2/591f52d09d50d8870b1601d17c65c0767b1d2e1db18e67a25b132b849fea51b2/diff:/var/lib/docker/overlay2/e64bda0fa456ba46eaadd53b798f3bb3a7fb3e3956685834382f9aa1e7c905f9/diff:/var/lib/d
ocker/overlay2/f698a91600258430cf3c97106cbb6ffbbba5818713bca72a2aba46cf92255e27/diff:/var/lib/docker/overlay2/1323dd726fea756f28381ac36970e1171e467b330f1d43ed15be5a82f7d8a892/diff:/var/lib/docker/overlay2/9607967e3631ebbf10a2e397fc287ae0fbbed8fc54f3bf39da1d050a410bb255/diff:/var/lib/docker/overlay2/e12a332b82c5db56dbc7e53aaa44c06434b071764e20d913001f71d97fadd232/diff:/var/lib/docker/overlay2/97a4d1655b4f47448f2f200a6b8f150e8f2960d0d6ff2b0920fd238d9fdc2c31/diff:/var/lib/docker/overlay2/15df85038e2f3436e3b23a6a35b84dcfaf3a735e506bc5af660c42519ede298b/diff:/var/lib/docker/overlay2/f29a318a8cfae29d19562dd7912e063084b1d321d8ea83b99f2808e363cec6bc/diff:/var/lib/docker/overlay2/73ecd3a5605dfc1ae938831bd261835b5bb3bf460857b84c0fbdb5ffcb290ea4/diff:/var/lib/docker/overlay2/949f2d40b73ae371ac4e7c81ef706a01da68e0a57145f13a3fb86c7eced257ef/diff:/var/lib/docker/overlay2/8d25550160c88d6c241f448420dd26daecce6bec8f774f2856a177a168ce3fe6/diff:/var/lib/docker/overlay2/27cbe8818217798c2761338718966cd435aaffff19e407bc5f20e21a831
c0172/diff:/var/lib/docker/overlay2/a8f41e83c2e19c1acaeb75ef0ef6daafe8f0c5675eb7a992ea4ad209f87b46b2/diff:/var/lib/docker/overlay2/4f127e69080651067a861bb1f9bbd08f2f57f6e05be509454e3e2a0cb0ecb178/diff:/var/lib/docker/overlay2/8bb03066bbd99667f78fb7ff8ed0939f8b06292372682c8f4a89d827588f18e6/diff:/var/lib/docker/overlay2/73261e58d3c16db540f287c0ddcdf6f3c4b9c869786e4e7a661931de7d55843e/diff:/var/lib/docker/overlay2/d48b7bafe3c2c5c869e17e7b043f3b4a5e5a13904f8fee77e9c429d43728fca9/diff:/var/lib/docker/overlay2/2e7b5043b64f757d5a308975d9ad9a451757a9fa450a726ce95e73347c79827a/diff:/var/lib/docker/overlay2/e8b366c628c74f57c66fd24385fa652cb7cfa81cec087f8ccec4ab98a6ae74d3/diff:/var/lib/docker/overlay2/3bb66a3fc586cafc4962828727dae244c9ee067ec0243f3f41f4e8fd1466ea80/diff:/var/lib/docker/overlay2/414633bd8851e03d3803cf3f8aa8c554a49cca39dff0d98db607dc81f318caea/diff:/var/lib/docker/overlay2/b2138b716615229ce59ff1ce8021afd5ca9d54aa64dfb7a928f137245788c9af/diff:/var/lib/docker/overlay2/51951ea2e125ce6991f056da1954df04375089
bd3c3897a92ee7e036a2a2e9ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-611000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-611000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-611000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-611000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-611000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1acac552c61643db7a08c83a0b8360ca99d618df3f68bbfa72e6d3ca0b181a4b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50518"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50519"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50520"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1acac552c616",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-611000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ecc0ec0df28",
	                        "ingress-addon-legacy-611000"
	                    ],
	                    "NetworkID": "a358584ee8df7d60fb13eae2091bcaa5338550f8c195056d98256cfe40f5d4fd",
	                    "EndpointID": "5e375f01c44a5a74e70535bf5e6cf607e1ccd2ba5a826665e83f499e4287936e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
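In the docker inspect output above, each container port was published with a requested HostPort of "0", so Docker assigned free host ports (22/tcp ended up on 127.0.0.1:50516, the same address that appears as the ssh client endpoint later in the log). The following is a minimal sketch of reading those assigned bindings programmatically, assuming the Docker Engine Go SDK (github.com/docker/docker/client); the container name is taken from the report, the rest is illustrative.

// Sketch: look up the host port Docker assigned to the container's 22/tcp.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
	"github.com/docker/go-connections/nat"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	info, err := cli.ContainerInspect(context.Background(), "ingress-addon-legacy-611000")
	if err != nil {
		panic(err)
	}
	if info.NetworkSettings == nil {
		fmt.Println("no network settings (container not running?)")
		return
	}
	// Ports maps container ports such as "22/tcp" to the host bindings Docker
	// chose because the requested HostPort was "0".
	for _, b := range info.NetworkSettings.Ports[nat.Port("22/tcp")] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIP, b.HostPort)
	}
}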
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000: exit status 6 (391.490419ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 12:49:43.927183    5539 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-611000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-611000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (99.52s)
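The status check above does not fail because the container is down (docker reports it Running) but because the profile's context is missing from the kubeconfig, which is what the status.go "kubeconfig endpoint: extract IP" error reports. A minimal sketch of that kind of lookup follows, assuming client-go's clientcmd package; the path and profile name are copied from the log, and the code is an illustration rather than minikube's actual status implementation.

// Sketch: verify a named context exists in a kubeconfig before using it.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path and profile name as they appear in the report; adjust for a real run.
	kubeconfig := "/Users/jenkins/minikube-integration/15909-825/kubeconfig"
	profile := "ingress-addon-legacy-611000"

	cfg, err := clientcmd.LoadFromFile(kubeconfig)
	if err != nil {
		fmt.Println("load kubeconfig:", err)
		return
	}

	kctx, ok := cfg.Contexts[profile]
	if !ok {
		// This is the condition behind the "does not appear in ... kubeconfig" error above.
		fmt.Printf("context %q does not appear in %s\n", profile, kubeconfig)
		return
	}
	cluster, ok := cfg.Clusters[kctx.Cluster]
	if !ok {
		fmt.Printf("cluster %q referenced by the context is missing\n", kctx.Cluster)
		return
	}
	fmt.Println("apiserver endpoint:", cluster.Server)
}

The status output itself suggests `minikube update-context` for repairing a stale kubectl context, although in this run the context is absent from the kubeconfig entirely.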

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (113.37s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-611000 addons enable ingress-dns --alsologtostderr -v=5
E0223 12:51:09.892231    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-611000 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m52.916895481s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 12:49:43.980740    5550 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:49:43.981021    5550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:49:43.981026    5550 out.go:309] Setting ErrFile to fd 2...
	I0223 12:49:43.981031    5550 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:49:43.981152    5550 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:49:44.002688    5550 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I0223 12:49:44.024400    5550 config.go:182] Loaded profile config "ingress-addon-legacy-611000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I0223 12:49:44.024432    5550 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-611000"
	I0223 12:49:44.024444    5550 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-611000"
	I0223 12:49:44.024953    5550 host.go:66] Checking if "ingress-addon-legacy-611000" exists ...
	I0223 12:49:44.025901    5550 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-611000 --format={{.State.Status}}
	I0223 12:49:44.105173    5550 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I0223 12:49:44.127033    5550 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I0223 12:49:44.149071    5550 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0223 12:49:44.149110    5550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I0223 12:49:44.149277    5550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-611000
	I0223 12:49:44.207449    5550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:50516 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/ingress-addon-legacy-611000/id_rsa Username:docker}
	I0223 12:49:44.307934    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:44.358652    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:44.358692    5550 retry.go:31] will retry after 159.640195ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:44.518994    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:44.572769    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:44.572786    5550 retry.go:31] will retry after 552.146474ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:45.125082    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:45.178179    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:45.178196    5550 retry.go:31] will retry after 445.55973ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:45.625376    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:45.680069    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:45.680085    5550 retry.go:31] will retry after 658.039198ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:46.340403    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:46.392889    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:46.392909    5550 retry.go:31] will retry after 1.162223814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:47.557382    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:47.611205    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:47.611220    5550 retry.go:31] will retry after 1.628212583s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:49.240640    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:49.292811    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:49.292825    5550 retry.go:31] will retry after 3.411700292s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:52.704753    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:52.756384    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:52.756400    5550 retry.go:31] will retry after 3.007452705s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:55.764221    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:49:55.817129    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:49:55.817148    5550 retry.go:31] will retry after 4.514406054s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:00.333906    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:50:00.387188    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:00.387202    5550 retry.go:31] will retry after 6.442243403s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:06.830303    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:50:06.884107    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:06.884122    5550 retry.go:31] will retry after 18.944393687s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:25.829568    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:50:25.883206    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:25.883219    5550 retry.go:31] will retry after 29.16557635s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:55.049568    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:50:55.102573    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:50:55.102590    5550 retry.go:31] will retry after 41.604749378s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:51:36.710326    5550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W0223 12:51:36.764787    5550 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0223 12:51:36.786425    5550 out.go:177] 
	W0223 12:51:36.807385    5550 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W0223 12:51:36.807424    5550 out.go:239] * 
	* 
	W0223 12:51:36.811966    5550 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 12:51:36.833124    5550 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-611000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-611000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb",
	        "Created": "2023-02-23T20:44:02.339675473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 48543,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T20:44:02.624221722Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/hosts",
	        "LogPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb-json.log",
	        "Name": "/ingress-addon-legacy-611000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-611000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-611000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b-init/diff:/var/lib/docker/overlay2/8ec2612a0ddcb8334b31fa2e2bc600c6d5b9a8c44165b2b56481359e67f82632/diff:/var/lib/docker/overlay2/5a4fcd864af35524d91e9f03f7a3ee889f13eb86bb854aeb6e62c3838280d5fc/diff:/var/lib/docker/overlay2/ca9e0d5e9bddb9a2d473c37bab2ac5f9f184126f5fb6e4c745f3be8914c03532/diff:/var/lib/docker/overlay2/619c31ca980751eda08bd35f1a83d95b3063245da47b494f158d072021494f4c/diff:/var/lib/docker/overlay2/7d620f2b5b85f7324d49fb2708fb7d4f1db9ff6b108d4ca3c6e3f6e8898b3ccc/diff:/var/lib/docker/overlay2/4ddfbadfca4c3e934e23063eb72f0a8b496f080e58fde7b65d0d73fac442087a/diff:/var/lib/docker/overlay2/27b7006de0c1a19fcc1c6121cd2f4e901780b83b732ce0880bc790e4d703cca6/diff:/var/lib/docker/overlay2/db9789081d8550dc6534127eb8db4d8c036eb99ed233cd3b179dcdd2148a8383/diff:/var/lib/docker/overlay2/78c4cb6843b7d55ed4487f84ff898a18bd4cf5b3ed008c952adc374157e890e2/diff:/var/lib/docker/overlay2/03a217
ffcc58371b47ca0920df99dd665be045c23519c8cf9abab2bdab1c5054/diff:/var/lib/docker/overlay2/011d725b17aadc4eb439b621974c407496cba93a833556a743d66552c707c1dc/diff:/var/lib/docker/overlay2/0b008f9fc314f9c01e518f7460862c8547f3d93385956a53f28f98fcd75dadd6/diff:/var/lib/docker/overlay2/356adf5e7cf2a827d25ddea32416e1a9e7d00b4b0adba15e70b4851516eaf000/diff:/var/lib/docker/overlay2/c9670a6f6981744d99152f0dbb1d59bf038363e715ac12f11e6ac3afec9650e4/diff:/var/lib/docker/overlay2/ab49bf4c3150a4da37f8525728f9da7e0aaded3fe8a24f903933eacd72f241da/diff:/var/lib/docker/overlay2/384753914be6edc5df597f20420a7b590d74a58e09b4f7eea9d19f5ccd3a971d/diff:/var/lib/docker/overlay2/a055650e8b909c9a2df13d514e5fcc459a3456dbcc9bc4597740578105e5f705/diff:/var/lib/docker/overlay2/985a888024d5ed2ee945bf037da4836977930ed967631a6e18255471a7b729c4/diff:/var/lib/docker/overlay2/591f52d09d50d8870b1601d17c65c0767b1d2e1db18e67a25b132b849fea51b2/diff:/var/lib/docker/overlay2/e64bda0fa456ba46eaadd53b798f3bb3a7fb3e3956685834382f9aa1e7c905f9/diff:/var/lib/d
ocker/overlay2/f698a91600258430cf3c97106cbb6ffbbba5818713bca72a2aba46cf92255e27/diff:/var/lib/docker/overlay2/1323dd726fea756f28381ac36970e1171e467b330f1d43ed15be5a82f7d8a892/diff:/var/lib/docker/overlay2/9607967e3631ebbf10a2e397fc287ae0fbbed8fc54f3bf39da1d050a410bb255/diff:/var/lib/docker/overlay2/e12a332b82c5db56dbc7e53aaa44c06434b071764e20d913001f71d97fadd232/diff:/var/lib/docker/overlay2/97a4d1655b4f47448f2f200a6b8f150e8f2960d0d6ff2b0920fd238d9fdc2c31/diff:/var/lib/docker/overlay2/15df85038e2f3436e3b23a6a35b84dcfaf3a735e506bc5af660c42519ede298b/diff:/var/lib/docker/overlay2/f29a318a8cfae29d19562dd7912e063084b1d321d8ea83b99f2808e363cec6bc/diff:/var/lib/docker/overlay2/73ecd3a5605dfc1ae938831bd261835b5bb3bf460857b84c0fbdb5ffcb290ea4/diff:/var/lib/docker/overlay2/949f2d40b73ae371ac4e7c81ef706a01da68e0a57145f13a3fb86c7eced257ef/diff:/var/lib/docker/overlay2/8d25550160c88d6c241f448420dd26daecce6bec8f774f2856a177a168ce3fe6/diff:/var/lib/docker/overlay2/27cbe8818217798c2761338718966cd435aaffff19e407bc5f20e21a831
c0172/diff:/var/lib/docker/overlay2/a8f41e83c2e19c1acaeb75ef0ef6daafe8f0c5675eb7a992ea4ad209f87b46b2/diff:/var/lib/docker/overlay2/4f127e69080651067a861bb1f9bbd08f2f57f6e05be509454e3e2a0cb0ecb178/diff:/var/lib/docker/overlay2/8bb03066bbd99667f78fb7ff8ed0939f8b06292372682c8f4a89d827588f18e6/diff:/var/lib/docker/overlay2/73261e58d3c16db540f287c0ddcdf6f3c4b9c869786e4e7a661931de7d55843e/diff:/var/lib/docker/overlay2/d48b7bafe3c2c5c869e17e7b043f3b4a5e5a13904f8fee77e9c429d43728fca9/diff:/var/lib/docker/overlay2/2e7b5043b64f757d5a308975d9ad9a451757a9fa450a726ce95e73347c79827a/diff:/var/lib/docker/overlay2/e8b366c628c74f57c66fd24385fa652cb7cfa81cec087f8ccec4ab98a6ae74d3/diff:/var/lib/docker/overlay2/3bb66a3fc586cafc4962828727dae244c9ee067ec0243f3f41f4e8fd1466ea80/diff:/var/lib/docker/overlay2/414633bd8851e03d3803cf3f8aa8c554a49cca39dff0d98db607dc81f318caea/diff:/var/lib/docker/overlay2/b2138b716615229ce59ff1ce8021afd5ca9d54aa64dfb7a928f137245788c9af/diff:/var/lib/docker/overlay2/51951ea2e125ce6991f056da1954df04375089
bd3c3897a92ee7e036a2a2e9ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-611000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-611000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-611000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-611000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-611000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1acac552c61643db7a08c83a0b8360ca99d618df3f68bbfa72e6d3ca0b181a4b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50518"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50519"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50520"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1acac552c616",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-611000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ecc0ec0df28",
	                        "ingress-addon-legacy-611000"
	                    ],
	                    "NetworkID": "a358584ee8df7d60fb13eae2091bcaa5338550f8c195056d98256cfe40f5d4fd",
	                    "EndpointID": "5e375f01c44a5a74e70535bf5e6cf607e1ccd2ba5a826665e83f499e4287936e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000: exit status 6 (388.003185ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 12:51:37.295992    5672 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-611000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-611000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (113.37s)
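Both status probes above exit with status 6 for the same reason: the profile's entry has dropped out of the kubeconfig ("ingress-addon-legacy-611000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig), so the helper skips log retrieval even though the container itself reports Running. A minimal manual-repair sketch, built only from commands already shown in this report plus a standard kubectl context listing (the get-contexts call is illustrative, not something the test runs):

	# inspect the kubeconfig the test binary points at
	kubectl --kubeconfig /Users/jenkins/minikube-integration/15909-825/kubeconfig config get-contexts
	# rewrite the stale context, as the warning in the output suggests
	out/minikube-darwin-amd64 -p ingress-addon-legacy-611000 update-context
	# re-run the same status probe the helper uses
	out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000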

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:171: failed to get Kubernetes client: <nil>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-611000
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-611000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb",
	        "Created": "2023-02-23T20:44:02.339675473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 48543,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T20:44:02.624221722Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/hosts",
	        "LogPath": "/var/lib/docker/containers/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb/5ecc0ec0df28e7f3b24a68532bcc9db2427866cc8420666494e5025447a5d1bb-json.log",
	        "Name": "/ingress-addon-legacy-611000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ingress-addon-legacy-611000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-611000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b-init/diff:/var/lib/docker/overlay2/8ec2612a0ddcb8334b31fa2e2bc600c6d5b9a8c44165b2b56481359e67f82632/diff:/var/lib/docker/overlay2/5a4fcd864af35524d91e9f03f7a3ee889f13eb86bb854aeb6e62c3838280d5fc/diff:/var/lib/docker/overlay2/ca9e0d5e9bddb9a2d473c37bab2ac5f9f184126f5fb6e4c745f3be8914c03532/diff:/var/lib/docker/overlay2/619c31ca980751eda08bd35f1a83d95b3063245da47b494f158d072021494f4c/diff:/var/lib/docker/overlay2/7d620f2b5b85f7324d49fb2708fb7d4f1db9ff6b108d4ca3c6e3f6e8898b3ccc/diff:/var/lib/docker/overlay2/4ddfbadfca4c3e934e23063eb72f0a8b496f080e58fde7b65d0d73fac442087a/diff:/var/lib/docker/overlay2/27b7006de0c1a19fcc1c6121cd2f4e901780b83b732ce0880bc790e4d703cca6/diff:/var/lib/docker/overlay2/db9789081d8550dc6534127eb8db4d8c036eb99ed233cd3b179dcdd2148a8383/diff:/var/lib/docker/overlay2/78c4cb6843b7d55ed4487f84ff898a18bd4cf5b3ed008c952adc374157e890e2/diff:/var/lib/docker/overlay2/03a217
ffcc58371b47ca0920df99dd665be045c23519c8cf9abab2bdab1c5054/diff:/var/lib/docker/overlay2/011d725b17aadc4eb439b621974c407496cba93a833556a743d66552c707c1dc/diff:/var/lib/docker/overlay2/0b008f9fc314f9c01e518f7460862c8547f3d93385956a53f28f98fcd75dadd6/diff:/var/lib/docker/overlay2/356adf5e7cf2a827d25ddea32416e1a9e7d00b4b0adba15e70b4851516eaf000/diff:/var/lib/docker/overlay2/c9670a6f6981744d99152f0dbb1d59bf038363e715ac12f11e6ac3afec9650e4/diff:/var/lib/docker/overlay2/ab49bf4c3150a4da37f8525728f9da7e0aaded3fe8a24f903933eacd72f241da/diff:/var/lib/docker/overlay2/384753914be6edc5df597f20420a7b590d74a58e09b4f7eea9d19f5ccd3a971d/diff:/var/lib/docker/overlay2/a055650e8b909c9a2df13d514e5fcc459a3456dbcc9bc4597740578105e5f705/diff:/var/lib/docker/overlay2/985a888024d5ed2ee945bf037da4836977930ed967631a6e18255471a7b729c4/diff:/var/lib/docker/overlay2/591f52d09d50d8870b1601d17c65c0767b1d2e1db18e67a25b132b849fea51b2/diff:/var/lib/docker/overlay2/e64bda0fa456ba46eaadd53b798f3bb3a7fb3e3956685834382f9aa1e7c905f9/diff:/var/lib/d
ocker/overlay2/f698a91600258430cf3c97106cbb6ffbbba5818713bca72a2aba46cf92255e27/diff:/var/lib/docker/overlay2/1323dd726fea756f28381ac36970e1171e467b330f1d43ed15be5a82f7d8a892/diff:/var/lib/docker/overlay2/9607967e3631ebbf10a2e397fc287ae0fbbed8fc54f3bf39da1d050a410bb255/diff:/var/lib/docker/overlay2/e12a332b82c5db56dbc7e53aaa44c06434b071764e20d913001f71d97fadd232/diff:/var/lib/docker/overlay2/97a4d1655b4f47448f2f200a6b8f150e8f2960d0d6ff2b0920fd238d9fdc2c31/diff:/var/lib/docker/overlay2/15df85038e2f3436e3b23a6a35b84dcfaf3a735e506bc5af660c42519ede298b/diff:/var/lib/docker/overlay2/f29a318a8cfae29d19562dd7912e063084b1d321d8ea83b99f2808e363cec6bc/diff:/var/lib/docker/overlay2/73ecd3a5605dfc1ae938831bd261835b5bb3bf460857b84c0fbdb5ffcb290ea4/diff:/var/lib/docker/overlay2/949f2d40b73ae371ac4e7c81ef706a01da68e0a57145f13a3fb86c7eced257ef/diff:/var/lib/docker/overlay2/8d25550160c88d6c241f448420dd26daecce6bec8f774f2856a177a168ce3fe6/diff:/var/lib/docker/overlay2/27cbe8818217798c2761338718966cd435aaffff19e407bc5f20e21a831
c0172/diff:/var/lib/docker/overlay2/a8f41e83c2e19c1acaeb75ef0ef6daafe8f0c5675eb7a992ea4ad209f87b46b2/diff:/var/lib/docker/overlay2/4f127e69080651067a861bb1f9bbd08f2f57f6e05be509454e3e2a0cb0ecb178/diff:/var/lib/docker/overlay2/8bb03066bbd99667f78fb7ff8ed0939f8b06292372682c8f4a89d827588f18e6/diff:/var/lib/docker/overlay2/73261e58d3c16db540f287c0ddcdf6f3c4b9c869786e4e7a661931de7d55843e/diff:/var/lib/docker/overlay2/d48b7bafe3c2c5c869e17e7b043f3b4a5e5a13904f8fee77e9c429d43728fca9/diff:/var/lib/docker/overlay2/2e7b5043b64f757d5a308975d9ad9a451757a9fa450a726ce95e73347c79827a/diff:/var/lib/docker/overlay2/e8b366c628c74f57c66fd24385fa652cb7cfa81cec087f8ccec4ab98a6ae74d3/diff:/var/lib/docker/overlay2/3bb66a3fc586cafc4962828727dae244c9ee067ec0243f3f41f4e8fd1466ea80/diff:/var/lib/docker/overlay2/414633bd8851e03d3803cf3f8aa8c554a49cca39dff0d98db607dc81f318caea/diff:/var/lib/docker/overlay2/b2138b716615229ce59ff1ce8021afd5ca9d54aa64dfb7a928f137245788c9af/diff:/var/lib/docker/overlay2/51951ea2e125ce6991f056da1954df04375089
bd3c3897a92ee7e036a2a2e9ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f614959b226d12dfdf23c1d4533df275e0622c8ba710146822a431a8a3b3915b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-611000",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-611000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-611000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-611000",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-611000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1acac552c61643db7a08c83a0b8360ca99d618df3f68bbfa72e6d3ca0b181a4b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50516"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50517"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50518"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50519"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "50520"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1acac552c616",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-611000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "5ecc0ec0df28",
	                        "ingress-addon-legacy-611000"
	                    ],
	                    "NetworkID": "a358584ee8df7d60fb13eae2091bcaa5338550f8c195056d98256cfe40f5d4fd",
	                    "EndpointID": "5e375f01c44a5a74e70535bf5e6cf607e1ccd2ba5a826665e83f499e4287936e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-611000 -n ingress-addon-legacy-611000: exit status 6 (388.977181ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 12:51:37.745135    5684 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-611000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-611000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-899000 -- rollout status deployment/busybox: (3.983472512s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:496: expected 2 Pod IPs but got 1
multinode_test.go:503: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- nslookup kubernetes.io: exit status 1 (161.70896ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:513: Pod busybox-6b86dd6d48-8hfr6 could not resolve 'kubernetes.io': exit status 1
multinode_test.go:511: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-c2dqh -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- nslookup kubernetes.default: exit status 1 (156.475157ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:523: Pod busybox-6b86dd6d48-8hfr6 could not resolve 'kubernetes.default': exit status 1
multinode_test.go:521: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-c2dqh -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (155.991296ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:531: Pod busybox-6b86dd6d48-8hfr6 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
multinode_test.go:529: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-c2dqh -- nslookup kubernetes.default.svc.cluster.local
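Only one pod IP came back from the jsonpath query, and every lookup from busybox-6b86dd6d48-8hfr6 fails while its sibling busybox-6b86dd6d48-c2dqh resolves fine, so the problem looks specific to that one replica (possibly the pod scheduled onto the second node) rather than to cluster DNS as a whole. A rough triage sketch driven through the same profile-scoped kubectl the test uses; the -o wide listing, the name/IP/node jsonpath, and the resolv.conf check are standard kubectl usage added here for illustration, not steps the test itself performs:

	# which node did each busybox replica land on, and does it have a pod IP yet?
	out/minikube-darwin-amd64 kubectl -p multinode-899000 -- get pods -o wide
	# same data as name / podIP / node triples
	out/minikube-darwin-amd64 kubectl -p multinode-899000 -- get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\t"}{.spec.nodeName}{"\n"}{end}'
	# what DNS configuration did the failing pod actually receive?
	out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- cat /etc/resolv.conf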
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-899000
helpers_test.go:235: (dbg) docker inspect multinode-899000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f",
	        "Created": "2023-02-23T20:57:05.198521017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 92358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T20:57:05.479780897Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/hosts",
	        "LogPath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f-json.log",
	        "Name": "/multinode-899000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-899000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-899000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23-init/diff:/var/lib/docker/overlay2/8ec2612a0ddcb8334b31fa2e2bc600c6d5b9a8c44165b2b56481359e67f82632/diff:/var/lib/docker/overlay2/5a4fcd864af35524d91e9f03f7a3ee889f13eb86bb854aeb6e62c3838280d5fc/diff:/var/lib/docker/overlay2/ca9e0d5e9bddb9a2d473c37bab2ac5f9f184126f5fb6e4c745f3be8914c03532/diff:/var/lib/docker/overlay2/619c31ca980751eda08bd35f1a83d95b3063245da47b494f158d072021494f4c/diff:/var/lib/docker/overlay2/7d620f2b5b85f7324d49fb2708fb7d4f1db9ff6b108d4ca3c6e3f6e8898b3ccc/diff:/var/lib/docker/overlay2/4ddfbadfca4c3e934e23063eb72f0a8b496f080e58fde7b65d0d73fac442087a/diff:/var/lib/docker/overlay2/27b7006de0c1a19fcc1c6121cd2f4e901780b83b732ce0880bc790e4d703cca6/diff:/var/lib/docker/overlay2/db9789081d8550dc6534127eb8db4d8c036eb99ed233cd3b179dcdd2148a8383/diff:/var/lib/docker/overlay2/78c4cb6843b7d55ed4487f84ff898a18bd4cf5b3ed008c952adc374157e890e2/diff:/var/lib/docker/overlay2/03a217
ffcc58371b47ca0920df99dd665be045c23519c8cf9abab2bdab1c5054/diff:/var/lib/docker/overlay2/011d725b17aadc4eb439b621974c407496cba93a833556a743d66552c707c1dc/diff:/var/lib/docker/overlay2/0b008f9fc314f9c01e518f7460862c8547f3d93385956a53f28f98fcd75dadd6/diff:/var/lib/docker/overlay2/356adf5e7cf2a827d25ddea32416e1a9e7d00b4b0adba15e70b4851516eaf000/diff:/var/lib/docker/overlay2/c9670a6f6981744d99152f0dbb1d59bf038363e715ac12f11e6ac3afec9650e4/diff:/var/lib/docker/overlay2/ab49bf4c3150a4da37f8525728f9da7e0aaded3fe8a24f903933eacd72f241da/diff:/var/lib/docker/overlay2/384753914be6edc5df597f20420a7b590d74a58e09b4f7eea9d19f5ccd3a971d/diff:/var/lib/docker/overlay2/a055650e8b909c9a2df13d514e5fcc459a3456dbcc9bc4597740578105e5f705/diff:/var/lib/docker/overlay2/985a888024d5ed2ee945bf037da4836977930ed967631a6e18255471a7b729c4/diff:/var/lib/docker/overlay2/591f52d09d50d8870b1601d17c65c0767b1d2e1db18e67a25b132b849fea51b2/diff:/var/lib/docker/overlay2/e64bda0fa456ba46eaadd53b798f3bb3a7fb3e3956685834382f9aa1e7c905f9/diff:/var/lib/d
ocker/overlay2/f698a91600258430cf3c97106cbb6ffbbba5818713bca72a2aba46cf92255e27/diff:/var/lib/docker/overlay2/1323dd726fea756f28381ac36970e1171e467b330f1d43ed15be5a82f7d8a892/diff:/var/lib/docker/overlay2/9607967e3631ebbf10a2e397fc287ae0fbbed8fc54f3bf39da1d050a410bb255/diff:/var/lib/docker/overlay2/e12a332b82c5db56dbc7e53aaa44c06434b071764e20d913001f71d97fadd232/diff:/var/lib/docker/overlay2/97a4d1655b4f47448f2f200a6b8f150e8f2960d0d6ff2b0920fd238d9fdc2c31/diff:/var/lib/docker/overlay2/15df85038e2f3436e3b23a6a35b84dcfaf3a735e506bc5af660c42519ede298b/diff:/var/lib/docker/overlay2/f29a318a8cfae29d19562dd7912e063084b1d321d8ea83b99f2808e363cec6bc/diff:/var/lib/docker/overlay2/73ecd3a5605dfc1ae938831bd261835b5bb3bf460857b84c0fbdb5ffcb290ea4/diff:/var/lib/docker/overlay2/949f2d40b73ae371ac4e7c81ef706a01da68e0a57145f13a3fb86c7eced257ef/diff:/var/lib/docker/overlay2/8d25550160c88d6c241f448420dd26daecce6bec8f774f2856a177a168ce3fe6/diff:/var/lib/docker/overlay2/27cbe8818217798c2761338718966cd435aaffff19e407bc5f20e21a831
c0172/diff:/var/lib/docker/overlay2/a8f41e83c2e19c1acaeb75ef0ef6daafe8f0c5675eb7a992ea4ad209f87b46b2/diff:/var/lib/docker/overlay2/4f127e69080651067a861bb1f9bbd08f2f57f6e05be509454e3e2a0cb0ecb178/diff:/var/lib/docker/overlay2/8bb03066bbd99667f78fb7ff8ed0939f8b06292372682c8f4a89d827588f18e6/diff:/var/lib/docker/overlay2/73261e58d3c16db540f287c0ddcdf6f3c4b9c869786e4e7a661931de7d55843e/diff:/var/lib/docker/overlay2/d48b7bafe3c2c5c869e17e7b043f3b4a5e5a13904f8fee77e9c429d43728fca9/diff:/var/lib/docker/overlay2/2e7b5043b64f757d5a308975d9ad9a451757a9fa450a726ce95e73347c79827a/diff:/var/lib/docker/overlay2/e8b366c628c74f57c66fd24385fa652cb7cfa81cec087f8ccec4ab98a6ae74d3/diff:/var/lib/docker/overlay2/3bb66a3fc586cafc4962828727dae244c9ee067ec0243f3f41f4e8fd1466ea80/diff:/var/lib/docker/overlay2/414633bd8851e03d3803cf3f8aa8c554a49cca39dff0d98db607dc81f318caea/diff:/var/lib/docker/overlay2/b2138b716615229ce59ff1ce8021afd5ca9d54aa64dfb7a928f137245788c9af/diff:/var/lib/docker/overlay2/51951ea2e125ce6991f056da1954df04375089
bd3c3897a92ee7e036a2a2e9ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-899000",
	                "Source": "/var/lib/docker/volumes/multinode-899000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-899000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-899000",
	                "name.minikube.sigs.k8s.io": "multinode-899000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0893f49b2ebfbeed4d6531f12da7aa861b3f27403ce22ff5a3d269959ecb30a2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51103"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51104"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0893f49b2ebf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-899000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d420670bd4c5",
	                        "multinode-899000"
	                    ],
	                    "NetworkID": "74907d76fcbca3db0a3e224115a644eb0ad70a95bb2c54a24a34566f5665c6c8",
	                    "EndpointID": "92f617f7866f5839016390d26b6d715bd579262d63ac86cfca24748d985df14f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-899000 -n multinode-899000
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 logs -n 25: (2.355999433s)
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p second-188000                                  | second-188000        | jenkins | v1.29.0 | 23 Feb 23 12:55 PST | 23 Feb 23 12:56 PST |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| delete  | -p second-188000                                  | second-188000        | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| delete  | -p first-186000                                   | first-186000         | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| start   | -p mount-start-1-354000                           | mount-start-1-354000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46464                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-1-354000 ssh -- ls                    | mount-start-1-354000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| start   | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-367000 ssh -- ls                    | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-354000                           | mount-start-1-354000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-367000 ssh -- ls                    | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| start   | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| ssh     | mount-start-2-367000 ssh -- ls                    | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| delete  | -p mount-start-1-354000                           | mount-start-1-354000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| start   | -p multinode-899000                               | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:58 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- apply -f                   | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- rollout                    | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- get pods -o                | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- get pods -o                | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 12:56:57
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 12:56:57.258012    7621 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:56:57.258168    7621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:56:57.258173    7621 out.go:309] Setting ErrFile to fd 2...
	I0223 12:56:57.258177    7621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:56:57.258290    7621 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:56:57.259624    7621 out.go:303] Setting JSON to false
	I0223 12:56:57.278075    7621 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1592,"bootTime":1677184225,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:56:57.278200    7621 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:56:57.299385    7621 out.go:177] * [multinode-899000] minikube v1.29.0 on Darwin 13.2
	I0223 12:56:57.341708    7621 notify.go:220] Checking for updates...
	I0223 12:56:57.363236    7621 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 12:56:57.384243    7621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:56:57.405392    7621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:56:57.426199    7621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:56:57.447257    7621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 12:56:57.468460    7621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 12:56:57.489569    7621 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 12:56:57.551685    7621 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:56:57.551814    7621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:56:57.692723    7621 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:56:57.600524656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:56:57.736150    7621 out.go:177] * Using the docker driver based on user configuration
	I0223 12:56:57.757142    7621 start.go:296] selected driver: docker
	I0223 12:56:57.757169    7621 start.go:857] validating driver "docker" against <nil>
	I0223 12:56:57.757185    7621 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 12:56:57.761124    7621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:56:57.902178    7621 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:56:57.810277225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:56:57.902283    7621 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 12:56:57.902491    7621 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 12:56:57.924266    7621 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 12:56:57.945893    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:56:57.945920    7621 cni.go:136] 0 nodes found, recommending kindnet
	I0223 12:56:57.945930    7621 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 12:56:57.945950    7621 start_flags.go:319] config:
	{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:56:57.967764    7621 out.go:177] * Starting control plane node multinode-899000 in cluster multinode-899000
	I0223 12:56:57.989030    7621 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:56:58.010848    7621 out.go:177] * Pulling base image ...
	I0223 12:56:58.052993    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:56:58.053051    7621 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:56:58.053104    7621 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 12:56:58.053127    7621 cache.go:57] Caching tarball of preloaded images
	I0223 12:56:58.053350    7621 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 12:56:58.053369    7621 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 12:56:58.055755    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:56:58.055813    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json: {Name:mk6af36b0687a54554dd5acaa8f5c9b1d8730d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:56:58.109286    7621 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 12:56:58.109305    7621 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 12:56:58.109324    7621 cache.go:193] Successfully downloaded all kic artifacts
	I0223 12:56:58.109367    7621 start.go:364] acquiring machines lock for multinode-899000: {Name:mk988186d61e0f5195c5933755c16d9cd5d267e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 12:56:58.109526    7621 start.go:368] acquired machines lock for "multinode-899000" in 147.42µs
	I0223 12:56:58.109558    7621 start.go:93] Provisioning new machine with config: &{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 12:56:58.109633    7621 start.go:125] createHost starting for "" (driver="docker")
	I0223 12:56:58.131781    7621 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 12:56:58.132168    7621 start.go:159] libmachine.API.Create for "multinode-899000" (driver="docker")
	I0223 12:56:58.132217    7621 client.go:168] LocalClient.Create starting
	I0223 12:56:58.132396    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 12:56:58.132487    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:56:58.132520    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:56:58.132642    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 12:56:58.132706    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:56:58.132723    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:56:58.133586    7621 cli_runner.go:164] Run: docker network inspect multinode-899000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 12:56:58.187512    7621 cli_runner.go:211] docker network inspect multinode-899000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 12:56:58.187618    7621 network_create.go:281] running [docker network inspect multinode-899000] to gather additional debugging logs...
	I0223 12:56:58.187637    7621 cli_runner.go:164] Run: docker network inspect multinode-899000
	W0223 12:56:58.240550    7621 cli_runner.go:211] docker network inspect multinode-899000 returned with exit code 1
	I0223 12:56:58.240580    7621 network_create.go:284] error running [docker network inspect multinode-899000]: docker network inspect multinode-899000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-899000
	I0223 12:56:58.240592    7621 network_create.go:286] output of [docker network inspect multinode-899000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-899000
	
	** /stderr **
	I0223 12:56:58.240685    7621 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 12:56:58.295025    7621 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 12:56:58.295363    7621 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e50320}
	I0223 12:56:58.295376    7621 network_create.go:123] attempt to create docker network multinode-899000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 12:56:58.295453    7621 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-899000 multinode-899000
	I0223 12:56:58.380688    7621 network_create.go:107] docker network multinode-899000 192.168.58.0/24 created
	I0223 12:56:58.380730    7621 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-899000" container
	I0223 12:56:58.380855    7621 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 12:56:58.435590    7621 cli_runner.go:164] Run: docker volume create multinode-899000 --label name.minikube.sigs.k8s.io=multinode-899000 --label created_by.minikube.sigs.k8s.io=true
	I0223 12:56:58.489652    7621 oci.go:103] Successfully created a docker volume multinode-899000
	I0223 12:56:58.489796    7621 cli_runner.go:164] Run: docker run --rm --name multinode-899000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000 --entrypoint /usr/bin/test -v multinode-899000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 12:56:58.925960    7621 oci.go:107] Successfully prepared a docker volume multinode-899000
	I0223 12:56:58.926003    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:56:58.926018    7621 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 12:56:58.926123    7621 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 12:57:05.001316    7621 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.074998075s)
	I0223 12:57:05.001336    7621 kic.go:199] duration metric: took 6.075208 seconds to extract preloaded images to volume
	I0223 12:57:05.001454    7621 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 12:57:05.144571    7621 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-899000 --name multinode-899000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-899000 --network multinode-899000 --ip 192.168.58.2 --volume multinode-899000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 12:57:05.488917    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Running}}
	I0223 12:57:05.549239    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:05.607583    7621 cli_runner.go:164] Run: docker exec multinode-899000 stat /var/lib/dpkg/alternatives/iptables
	I0223 12:57:05.721407    7621 oci.go:144] the created container "multinode-899000" has a running status.
	I0223 12:57:05.721438    7621 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa...
	I0223 12:57:05.882882    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 12:57:05.882954    7621 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 12:57:05.987239    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:06.045505    7621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 12:57:06.045526    7621 kic_runner.go:114] Args: [docker exec --privileged multinode-899000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 12:57:06.149979    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:06.206338    7621 machine.go:88] provisioning docker machine ...
	I0223 12:57:06.206382    7621 ubuntu.go:169] provisioning hostname "multinode-899000"
	I0223 12:57:06.206467    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.263261    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:06.263642    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:06.263655    7621 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899000 && echo "multinode-899000" | sudo tee /etc/hostname
	I0223 12:57:06.407667    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899000
	
	I0223 12:57:06.407755    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.466148    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:06.466522    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:06.466535    7621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 12:57:06.601336    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 12:57:06.601357    7621 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-825/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-825/.minikube}
	I0223 12:57:06.601376    7621 ubuntu.go:177] setting up certificates
	I0223 12:57:06.601384    7621 provision.go:83] configureAuth start
	I0223 12:57:06.601457    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:57:06.657696    7621 provision.go:138] copyHostCerts
	I0223 12:57:06.657743    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:57:06.657799    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem, removing ...
	I0223 12:57:06.657806    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:57:06.657904    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem (1078 bytes)
	I0223 12:57:06.658084    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:57:06.658117    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem, removing ...
	I0223 12:57:06.658122    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:57:06.658187    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem (1123 bytes)
	I0223 12:57:06.658315    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:57:06.658350    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem, removing ...
	I0223 12:57:06.658354    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:57:06.658415    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem (1675 bytes)
	I0223 12:57:06.658543    7621 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem org=jenkins.multinode-899000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-899000]
	I0223 12:57:06.714310    7621 provision.go:172] copyRemoteCerts
	I0223 12:57:06.714361    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 12:57:06.714408    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.770366    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:06.865632    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 12:57:06.865730    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 12:57:06.882924    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 12:57:06.883003    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 12:57:06.899760    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 12:57:06.899840    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 12:57:06.916716    7621 provision.go:86] duration metric: configureAuth took 315.312867ms
	I0223 12:57:06.916731    7621 ubuntu.go:193] setting minikube options for container-runtime
	I0223 12:57:06.916881    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:57:06.916949    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.973133    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:06.973483    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:06.973497    7621 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 12:57:07.105173    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 12:57:07.105200    7621 ubuntu.go:71] root file system type: overlay
	I0223 12:57:07.105292    7621 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 12:57:07.105391    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:07.161583    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:07.161943    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:07.161992    7621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 12:57:07.305650    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 12:57:07.305759    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:07.362739    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:07.363094    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:07.363107    7621 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 12:57:07.968160    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 20:57:07.303656908 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 12:57:07.968185    7621 machine.go:91] provisioned docker machine in 1.76179481s
	I0223 12:57:07.968191    7621 client.go:171] LocalClient.Create took 9.835787482s
	I0223 12:57:07.968223    7621 start.go:167] duration metric: libmachine.API.Create for "multinode-899000" took 9.835875729s
	I0223 12:57:07.968237    7621 start.go:300] post-start starting for "multinode-899000" (driver="docker")
	I0223 12:57:07.968242    7621 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 12:57:07.968317    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 12:57:07.968377    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.025072    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.119943    7621 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 12:57:08.123557    7621 command_runner.go:130] > NAME="Ubuntu"
	I0223 12:57:08.123568    7621 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 12:57:08.123572    7621 command_runner.go:130] > ID=ubuntu
	I0223 12:57:08.123586    7621 command_runner.go:130] > ID_LIKE=debian
	I0223 12:57:08.123592    7621 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 12:57:08.123596    7621 command_runner.go:130] > VERSION_ID="20.04"
	I0223 12:57:08.123603    7621 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 12:57:08.123608    7621 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 12:57:08.123612    7621 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 12:57:08.123622    7621 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 12:57:08.123626    7621 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 12:57:08.123630    7621 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 12:57:08.123685    7621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 12:57:08.123697    7621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 12:57:08.123704    7621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 12:57:08.123710    7621 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 12:57:08.123720    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/addons for local assets ...
	I0223 12:57:08.123819    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/files for local assets ...
	I0223 12:57:08.123992    7621 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> 20572.pem in /etc/ssl/certs
	I0223 12:57:08.123999    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /etc/ssl/certs/20572.pem
	I0223 12:57:08.124181    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 12:57:08.131664    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:57:08.148562    7621 start.go:303] post-start completed in 180.312213ms
	I0223 12:57:08.149075    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:57:08.205082    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:57:08.205495    7621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:57:08.205549    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.261424    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.354879    7621 command_runner.go:130] > 5%!
	(MISSING)I0223 12:57:08.354999    7621 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 12:57:08.359252    7621 command_runner.go:130] > 100G
	I0223 12:57:08.359532    7621 start.go:128] duration metric: createHost completed in 10.249707203s
	I0223 12:57:08.359567    7621 start.go:83] releasing machines lock for "multinode-899000", held for 10.249845411s
	I0223 12:57:08.359668    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:57:08.415149    7621 ssh_runner.go:195] Run: cat /version.json
	I0223 12:57:08.415178    7621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 12:57:08.415220    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.415244    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.474888    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.475075    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.624757    7621 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 12:57:08.626732    7621 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 12:57:08.626894    7621 ssh_runner.go:195] Run: systemctl --version
	I0223 12:57:08.631474    7621 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 12:57:08.631497    7621 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 12:57:08.631585    7621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 12:57:08.636472    7621 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 12:57:08.636484    7621 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 12:57:08.636489    7621 command_runner.go:130] > Device: a6h/166d	Inode: 2229761     Links: 1
	I0223 12:57:08.636494    7621 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:57:08.636501    7621 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:57:08.636505    7621 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:57:08.636509    7621 command_runner.go:130] > Change: 2023-02-23 20:33:52.692471760 +0000
	I0223 12:57:08.636513    7621 command_runner.go:130] >  Birth: -
	I0223 12:57:08.636573    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 12:57:08.656127    7621 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 12:57:08.656200    7621 ssh_runner.go:195] Run: which cri-dockerd
	I0223 12:57:08.659899    7621 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 12:57:08.660110    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 12:57:08.667420    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 12:57:08.679972    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 12:57:08.694467    7621 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 12:57:08.694510    7621 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 12:57:08.694522    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:57:08.694533    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:57:08.694603    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:57:08.706589    7621 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:57:08.706601    7621 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:57:08.707395    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 12:57:08.715692    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 12:57:08.724063    7621 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 12:57:08.724120    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 12:57:08.732428    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:57:08.740746    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 12:57:08.749220    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:57:08.757697    7621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 12:57:08.765528    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 12:57:08.773699    7621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 12:57:08.780158    7621 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 12:57:08.780789    7621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 12:57:08.787632    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:57:08.855416    7621 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 12:57:08.931306    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:57:08.931326    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:57:08.931389    7621 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 12:57:08.940671    7621 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 12:57:08.940905    7621 command_runner.go:130] > [Unit]
	I0223 12:57:08.940913    7621 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 12:57:08.940918    7621 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 12:57:08.940922    7621 command_runner.go:130] > BindsTo=containerd.service
	I0223 12:57:08.940927    7621 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 12:57:08.940931    7621 command_runner.go:130] > Wants=network-online.target
	I0223 12:57:08.940939    7621 command_runner.go:130] > Requires=docker.socket
	I0223 12:57:08.940943    7621 command_runner.go:130] > StartLimitBurst=3
	I0223 12:57:08.940947    7621 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 12:57:08.940950    7621 command_runner.go:130] > [Service]
	I0223 12:57:08.940955    7621 command_runner.go:130] > Type=notify
	I0223 12:57:08.940958    7621 command_runner.go:130] > Restart=on-failure
	I0223 12:57:08.940964    7621 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 12:57:08.940972    7621 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 12:57:08.940979    7621 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 12:57:08.940986    7621 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 12:57:08.940995    7621 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 12:57:08.941009    7621 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 12:57:08.941017    7621 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 12:57:08.941029    7621 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 12:57:08.941039    7621 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 12:57:08.941042    7621 command_runner.go:130] > ExecStart=
	I0223 12:57:08.941055    7621 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 12:57:08.941060    7621 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 12:57:08.941065    7621 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 12:57:08.941071    7621 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 12:57:08.941076    7621 command_runner.go:130] > LimitNOFILE=infinity
	I0223 12:57:08.941079    7621 command_runner.go:130] > LimitNPROC=infinity
	I0223 12:57:08.941087    7621 command_runner.go:130] > LimitCORE=infinity
	I0223 12:57:08.941092    7621 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 12:57:08.941096    7621 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 12:57:08.941101    7621 command_runner.go:130] > TasksMax=infinity
	I0223 12:57:08.941104    7621 command_runner.go:130] > TimeoutStartSec=0
	I0223 12:57:08.941111    7621 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 12:57:08.941115    7621 command_runner.go:130] > Delegate=yes
	I0223 12:57:08.941120    7621 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 12:57:08.941123    7621 command_runner.go:130] > KillMode=process
	I0223 12:57:08.941130    7621 command_runner.go:130] > [Install]
	I0223 12:57:08.941134    7621 command_runner.go:130] > WantedBy=multi-user.target
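	The drop-in above relies on systemd's override semantics: the empty ExecStart= clears the command inherited from the base unit, and the ExecStart= that follows becomes the only start command. A minimal way to confirm which command the service will actually run (a sketch, assuming a host where this docker unit and drop-in are installed):

		# show the base unit together with every drop-in that overrides it
		sudo systemctl cat docker
		# print the effective ExecStart value systemd will use after the reset
		sudo systemctl show -p ExecStart docker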
	I0223 12:57:08.941612    7621 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 12:57:08.941679    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 12:57:08.951806    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:57:08.965460    7621 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:57:08.965473    7621 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
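	With both endpoints in /etc/crictl.yaml pointing at cri-dockerd, crictl talks to Docker through the CRI shim without extra flags, which is what the later crictl version call relies on. A quick check (a sketch, assuming crictl and cri-dockerd are installed as configured above):

		# crictl reads /etc/crictl.yaml by default and should report docker as the runtime
		sudo crictl version
		# the endpoint can also be passed explicitly if the config file lives elsewhere
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info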
	I0223 12:57:08.966196    7621 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 12:57:09.061007    7621 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 12:57:09.148581    7621 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 12:57:09.148598    7621 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 12:57:09.161847    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:57:09.249710    7621 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 12:57:09.464408    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:57:09.531667    7621 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 12:57:09.531735    7621 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 12:57:09.595616    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:57:09.663989    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:57:09.732666    7621 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 12:57:09.752610    7621 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 12:57:09.752690    7621 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 12:57:09.756712    7621 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 12:57:09.756723    7621 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 12:57:09.756728    7621 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0223 12:57:09.756743    7621 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 12:57:09.756749    7621 command_runner.go:130] > Access: 2023-02-23 20:57:09.740656885 +0000
	I0223 12:57:09.756754    7621 command_runner.go:130] > Modify: 2023-02-23 20:57:09.740656885 +0000
	I0223 12:57:09.756759    7621 command_runner.go:130] > Change: 2023-02-23 20:57:09.749656885 +0000
	I0223 12:57:09.756762    7621 command_runner.go:130] >  Birth: -
	I0223 12:57:09.756782    7621 start.go:553] Will wait 60s for crictl version
	I0223 12:57:09.756820    7621 ssh_runner.go:195] Run: which crictl
	I0223 12:57:09.760384    7621 command_runner.go:130] > /usr/bin/crictl
	I0223 12:57:09.760508    7621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 12:57:09.851511    7621 command_runner.go:130] > Version:  0.1.0
	I0223 12:57:09.851524    7621 command_runner.go:130] > RuntimeName:  docker
	I0223 12:57:09.851528    7621 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 12:57:09.851532    7621 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 12:57:09.853615    7621 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 12:57:09.853687    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:57:09.876935    7621 command_runner.go:130] > 23.0.1
	I0223 12:57:09.878452    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:57:09.900755    7621 command_runner.go:130] > 23.0.1
	I0223 12:57:09.948673    7621 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 12:57:09.948832    7621 cli_runner.go:164] Run: docker exec -t multinode-899000 dig +short host.docker.internal
	I0223 12:57:10.057868    7621 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 12:57:10.057985    7621 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 12:57:10.062491    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:57:10.072388    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:10.130751    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:57:10.130843    7621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 12:57:10.148841    7621 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 12:57:10.148854    7621 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 12:57:10.148859    7621 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 12:57:10.148866    7621 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 12:57:10.148871    7621 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 12:57:10.148874    7621 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 12:57:10.148879    7621 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 12:57:10.148888    7621 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 12:57:10.150542    7621 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 12:57:10.150557    7621 docker.go:560] Images already preloaded, skipping extraction
	I0223 12:57:10.150644    7621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 12:57:10.169266    7621 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 12:57:10.169279    7621 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 12:57:10.169283    7621 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 12:57:10.169291    7621 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 12:57:10.169297    7621 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 12:57:10.169302    7621 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 12:57:10.169307    7621 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 12:57:10.169321    7621 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 12:57:10.170844    7621 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 12:57:10.170854    7621 cache_images.go:84] Images are preloaded, skipping loading
	I0223 12:57:10.170947    7621 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 12:57:10.194294    7621 command_runner.go:130] > cgroupfs
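	The "cgroupfs" answer from docker info is what drives the cgroupDriver value in the kubelet configuration generated below; the runtime and the kubelet must agree on the cgroup driver or pods fail to start. Checking the pairing by hand looks roughly like this (a sketch; /var/lib/kubelet/config.yaml only exists once kubeadm has written it):

		# cgroup driver reported by the container runtime
		docker info --format '{{.CgroupDriver}}'
		# cgroup driver the kubelet was configured with
		grep cgroupDriver /var/lib/kubelet/config.yaml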
	I0223 12:57:10.195997    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:57:10.196009    7621 cni.go:136] 1 nodes found, recommending kindnet
	I0223 12:57:10.196026    7621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 12:57:10.196044    7621 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899000 NodeName:multinode-899000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 12:57:10.196163    7621 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-899000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 12:57:10.196244    7621 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-899000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
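	The generated kubeadm config and kubelet drop-in above are copied onto the node a few lines below (kubeadm.yaml.new and 10-kubeadm.conf) before kubeadm init consumes them. Outside of this harness, the same file could be exercised first without modifying the node (a sketch, assuming kubeadm v1.26 is available on the target machine):

		# render the manifests and validate the config without touching the host
		sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run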
	I0223 12:57:10.196318    7621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 12:57:10.203463    7621 command_runner.go:130] > kubeadm
	I0223 12:57:10.203471    7621 command_runner.go:130] > kubectl
	I0223 12:57:10.203475    7621 command_runner.go:130] > kubelet
	I0223 12:57:10.204064    7621 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 12:57:10.204120    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 12:57:10.211323    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 12:57:10.223788    7621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 12:57:10.236358    7621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 12:57:10.248986    7621 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 12:57:10.252904    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:57:10.262635    7621 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000 for IP: 192.168.58.2
	I0223 12:57:10.262654    7621 certs.go:186] acquiring lock for shared ca certs: {Name:mk9b7a98958f4333f06cfa6d87963d4d7f2b94cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.262839    7621 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key
	I0223 12:57:10.262905    7621 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key
	I0223 12:57:10.262951    7621 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key
	I0223 12:57:10.262964    7621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt with IP's: []
	I0223 12:57:10.322657    7621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt ...
	I0223 12:57:10.322666    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt: {Name:mk230eb0789e348d7769aaa30562130e292016de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.322950    7621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key ...
	I0223 12:57:10.322957    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key: {Name:mk81e6b74a5e9dc1cb3968aba8a3f96d82eec2bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.323154    7621 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041
	I0223 12:57:10.323173    7621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 12:57:10.396692    7621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041 ...
	I0223 12:57:10.396700    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041: {Name:mk8f8abf41e20371cfca65b1f7d3d17c53f40fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.396905    7621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041 ...
	I0223 12:57:10.396914    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041: {Name:mk324908247e1988bfa2dea311b4e9ad6bbd9ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.397093    7621 certs.go:333] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt
	I0223 12:57:10.397249    7621 certs.go:337] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key
	I0223 12:57:10.397410    7621 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key
	I0223 12:57:10.397424    7621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt with IP's: []
	I0223 12:57:10.612767    7621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt ...
	I0223 12:57:10.612776    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt: {Name:mkdec6c6a484a2eaf518126dea9253068b149693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.612992    7621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key ...
	I0223 12:57:10.613000    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key: {Name:mk364443d14fe67da4fb43f9103d14289df59b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.613172    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 12:57:10.613200    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 12:57:10.613219    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 12:57:10.613238    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 12:57:10.613259    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 12:57:10.613277    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 12:57:10.613295    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 12:57:10.613320    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 12:57:10.613411    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem (1338 bytes)
	W0223 12:57:10.613457    7621 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057_empty.pem, impossibly tiny 0 bytes
	I0223 12:57:10.613467    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 12:57:10.613498    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem (1078 bytes)
	I0223 12:57:10.613531    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem (1123 bytes)
	I0223 12:57:10.613565    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem (1675 bytes)
	I0223 12:57:10.613634    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:57:10.613668    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem -> /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.613687    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.613708    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.614190    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 12:57:10.632844    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 12:57:10.649894    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 12:57:10.666693    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 12:57:10.683665    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 12:57:10.700539    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 12:57:10.717410    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 12:57:10.734188    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 12:57:10.751234    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem --> /usr/share/ca-certificates/2057.pem (1338 bytes)
	I0223 12:57:10.768201    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /usr/share/ca-certificates/20572.pem (1708 bytes)
	I0223 12:57:10.784911    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 12:57:10.801764    7621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 12:57:10.814485    7621 ssh_runner.go:195] Run: openssl version
	I0223 12:57:10.819597    7621 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 12:57:10.819916    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20572.pem && ln -fs /usr/share/ca-certificates/20572.pem /etc/ssl/certs/20572.pem"
	I0223 12:57:10.827936    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.831680    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.832204    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.832306    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.837895    7621 command_runner.go:130] > 3ec20f2e
	I0223 12:57:10.838093    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20572.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 12:57:10.845936    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 12:57:10.853817    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.857662    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.857948    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.858019    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.863075    7621 command_runner.go:130] > b5213941
	I0223 12:57:10.863486    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 12:57:10.871544    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2057.pem && ln -fs /usr/share/ca-certificates/2057.pem /etc/ssl/certs/2057.pem"
	I0223 12:57:10.879340    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.883115    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.883178    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.883222    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.888180    7621 command_runner.go:130] > 51391683
	I0223 12:57:10.888436    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2057.pem /etc/ssl/certs/51391683.0"
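	The three test/ln blocks above install each CA under its OpenSSL subject-hash name in /etc/ssl/certs, which is how the OpenSSL trust lookup finds it at verification time. The same convention in isolation (a sketch; ca.pem is a placeholder path, not a file from this run):

		# subject hash OpenSSL uses as the lookup key for a CA
		hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/ca.pem)
		# link the certificate under <hash>.0 so verification can resolve it
		sudo ln -fs /usr/share/ca-certificates/ca.pem "/etc/ssl/certs/${hash}.0"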
	I0223 12:57:10.896145    7621 kubeadm.go:401] StartCluster: {Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:57:10.896244    7621 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 12:57:10.915902    7621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 12:57:10.923723    7621 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 12:57:10.923735    7621 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 12:57:10.923740    7621 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 12:57:10.923802    7621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 12:57:10.931232    7621 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 12:57:10.931302    7621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 12:57:10.939026    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 12:57:10.939042    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 12:57:10.939048    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 12:57:10.939055    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 12:57:10.939078    7621 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 12:57:10.939099    7621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 12:57:10.987638    7621 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 12:57:10.987645    7621 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 12:57:10.987688    7621 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 12:57:10.987699    7621 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 12:57:11.093412    7621 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 12:57:11.093422    7621 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 12:57:11.093496    7621 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 12:57:11.093503    7621 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 12:57:11.093587    7621 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 12:57:11.093599    7621 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 12:57:11.221589    7621 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 12:57:11.221639    7621 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 12:57:11.263803    7621 out.go:204]   - Generating certificates and keys ...
	I0223 12:57:11.263920    7621 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 12:57:11.263935    7621 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 12:57:11.264003    7621 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 12:57:11.264016    7621 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 12:57:11.408786    7621 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 12:57:11.408793    7621 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 12:57:11.471993    7621 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 12:57:11.472006    7621 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 12:57:11.554812    7621 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 12:57:11.554823    7621 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 12:57:11.641056    7621 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 12:57:11.641070    7621 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 12:57:11.707218    7621 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 12:57:11.707228    7621 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 12:57:11.707346    7621 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.707359    7621 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.795926    7621 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 12:57:11.795933    7621 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 12:57:11.796051    7621 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.796060    7621 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.931956    7621 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 12:57:11.931968    7621 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 12:57:12.191982    7621 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 12:57:12.192004    7621 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 12:57:12.255363    7621 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 12:57:12.255372    7621 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 12:57:12.255424    7621 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 12:57:12.255433    7621 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 12:57:12.469451    7621 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 12:57:12.469462    7621 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 12:57:12.702193    7621 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 12:57:12.702205    7621 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 12:57:12.921511    7621 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 12:57:12.921528    7621 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 12:57:12.968259    7621 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 12:57:12.968269    7621 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 12:57:12.978609    7621 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:57:12.978618    7621 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:57:12.979200    7621 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:57:12.979222    7621 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:57:12.979261    7621 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 12:57:12.979268    7621 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 12:57:13.054089    7621 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 12:57:13.054114    7621 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 12:57:13.075671    7621 out.go:204]   - Booting up control plane ...
	I0223 12:57:13.075775    7621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 12:57:13.075783    7621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 12:57:13.075865    7621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 12:57:13.075872    7621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 12:57:13.075931    7621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 12:57:13.075945    7621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 12:57:13.076052    7621 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 12:57:13.076059    7621 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 12:57:13.076171    7621 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 12:57:13.076175    7621 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 12:57:21.560293    7621 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.501670 seconds
	I0223 12:57:21.560317    7621 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.501670 seconds
	I0223 12:57:21.560447    7621 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 12:57:21.560459    7621 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 12:57:21.568305    7621 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 12:57:21.568320    7621 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 12:57:22.082460    7621 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 12:57:22.082473    7621 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 12:57:22.082624    7621 kubeadm.go:322] [mark-control-plane] Marking the node multinode-899000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 12:57:22.082634    7621 command_runner.go:130] > [mark-control-plane] Marking the node multinode-899000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 12:57:22.589892    7621 kubeadm.go:322] [bootstrap-token] Using token: ybgu28.y4z8wg7gwd9t6sqw
	I0223 12:57:22.589930    7621 command_runner.go:130] > [bootstrap-token] Using token: ybgu28.y4z8wg7gwd9t6sqw
	I0223 12:57:22.611477    7621 out.go:204]   - Configuring RBAC rules ...
	I0223 12:57:22.611583    7621 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 12:57:22.611589    7621 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 12:57:22.652250    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 12:57:22.652265    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 12:57:22.657025    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 12:57:22.657035    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 12:57:22.659055    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 12:57:22.659061    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 12:57:22.661421    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 12:57:22.661434    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 12:57:22.663356    7621 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 12:57:22.663366    7621 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 12:57:22.670969    7621 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 12:57:22.670982    7621 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 12:57:22.814755    7621 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 12:57:22.814772    7621 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 12:57:23.055839    7621 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 12:57:23.055856    7621 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 12:57:23.056216    7621 kubeadm.go:322] 
	I0223 12:57:23.056299    7621 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 12:57:23.056315    7621 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 12:57:23.056342    7621 kubeadm.go:322] 
	I0223 12:57:23.056407    7621 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 12:57:23.056416    7621 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 12:57:23.056420    7621 kubeadm.go:322] 
	I0223 12:57:23.056465    7621 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 12:57:23.056484    7621 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 12:57:23.056553    7621 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 12:57:23.056565    7621 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 12:57:23.056636    7621 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 12:57:23.056650    7621 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 12:57:23.056659    7621 kubeadm.go:322] 
	I0223 12:57:23.056790    7621 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 12:57:23.056797    7621 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 12:57:23.056806    7621 kubeadm.go:322] 
	I0223 12:57:23.056858    7621 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 12:57:23.056867    7621 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 12:57:23.056879    7621 kubeadm.go:322] 
	I0223 12:57:23.056964    7621 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 12:57:23.056973    7621 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 12:57:23.057024    7621 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 12:57:23.057029    7621 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 12:57:23.057072    7621 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 12:57:23.057077    7621 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 12:57:23.057082    7621 kubeadm.go:322] 
	I0223 12:57:23.057168    7621 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 12:57:23.057175    7621 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 12:57:23.057235    7621 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 12:57:23.057240    7621 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 12:57:23.057243    7621 kubeadm.go:322] 
	I0223 12:57:23.057331    7621 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057340    7621 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057472    7621 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 \
	I0223 12:57:23.057480    7621 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 \
	I0223 12:57:23.057500    7621 kubeadm.go:322] 	--control-plane 
	I0223 12:57:23.057512    7621 command_runner.go:130] > 	--control-plane 
	I0223 12:57:23.057539    7621 kubeadm.go:322] 
	I0223 12:57:23.057609    7621 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 12:57:23.057619    7621 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 12:57:23.057639    7621 kubeadm.go:322] 
	I0223 12:57:23.057715    7621 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057721    7621 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057859    7621 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 
	I0223 12:57:23.057871    7621 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 
	I0223 12:57:23.061202    7621 kubeadm.go:322] W0223 20:57:10.980367    1300 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 12:57:23.061234    7621 command_runner.go:130] ! W0223 20:57:10.980367    1300 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 12:57:23.061402    7621 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 12:57:23.061415    7621 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 12:57:23.061519    7621 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:57:23.061528    7621 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:57:23.061554    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:57:23.061565    7621 cni.go:136] 1 nodes found, recommending kindnet
	I0223 12:57:23.101101    7621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 12:57:23.138066    7621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 12:57:23.143657    7621 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 12:57:23.143672    7621 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 12:57:23.143677    7621 command_runner.go:130] > Device: a6h/166d	Inode: 2102733     Links: 1
	I0223 12:57:23.143681    7621 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:57:23.143693    7621 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:57:23.143699    7621 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:57:23.143703    7621 command_runner.go:130] > Change: 2023-02-23 20:33:51.991471766 +0000
	I0223 12:57:23.143706    7621 command_runner.go:130] >  Birth: -
	I0223 12:57:23.143729    7621 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 12:57:23.143735    7621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 12:57:23.157035    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 12:57:23.749482    7621 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 12:57:23.753013    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 12:57:23.758972    7621 command_runner.go:130] > serviceaccount/kindnet created
	I0223 12:57:23.765872    7621 command_runner.go:130] > daemonset.apps/kindnet created
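	Once the kindnet objects above are created, the CNI daemonset should schedule one pod per node as nodes register. A quick way to confirm the rollout (a sketch, reusing the kubeconfig and kubectl binary from the commands above):

		# daemonset created by the manifest apply; DESIRED/READY should match the node count
		sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get daemonset kindnet
		sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -o wide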
	I0223 12:57:23.771409    7621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 12:57:23.771483    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:23.771497    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=7816f70daabe48630c945a757f21bf8d759fce7d minikube.k8s.io/name=multinode-899000 minikube.k8s.io/updated_at=2023_02_23T12_57_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:23.868977    7621 command_runner.go:130] > node/multinode-899000 labeled
	I0223 12:57:23.872482    7621 command_runner.go:130] > -16
	I0223 12:57:23.872515    7621 ops.go:34] apiserver oom_adj: -16
	I0223 12:57:23.872561    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0223 12:57:23.872663    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:23.937423    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:24.439650    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:24.504999    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:24.939688    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:25.004046    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:25.438090    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:25.501923    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:25.938865    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:26.001562    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:26.438455    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:26.503890    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:26.939517    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:27.004031    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:27.437863    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:27.500361    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:27.938317    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:28.002108    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:28.439927    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:28.504960    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:28.938536    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:29.002486    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:29.437806    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:29.502903    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:29.937739    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:30.002660    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:30.438090    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:30.502726    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:30.938104    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:31.000839    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:31.439249    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:31.503527    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:31.938436    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:31.999326    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:32.439988    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:32.504038    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:32.937939    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:33.003858    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:33.437817    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:33.501563    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:33.938475    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:34.002602    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:34.437957    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:34.501963    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:34.937861    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:35.000751    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:35.438256    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:35.502704    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:35.937865    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:36.036087    7621 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 12:57:36.036102    7621 command_runner.go:130] > default   0         1s
	I0223 12:57:36.039705    7621 kubeadm.go:1073] duration metric: took 12.268065786s to wait for elevateKubeSystemPrivileges.
	I0223 12:57:36.039722    7621 kubeadm.go:403] StartCluster complete in 25.14312823s
	I0223 12:57:36.039740    7621 settings.go:142] acquiring lock: {Name:mkbd8676df55bd54ade697ff92726c4299ba6b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:36.039832    7621 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.040283    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/kubeconfig: {Name:mka45aca5add49860892d9e622eefcdfd6860a2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:36.040527    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 12:57:36.040554    7621 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 12:57:36.040618    7621 addons.go:65] Setting storage-provisioner=true in profile "multinode-899000"
	I0223 12:57:36.040623    7621 addons.go:65] Setting default-storageclass=true in profile "multinode-899000"
	I0223 12:57:36.040640    7621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-899000"
	I0223 12:57:36.040641    7621 addons.go:227] Setting addon storage-provisioner=true in "multinode-899000"
	I0223 12:57:36.040676    7621 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:57:36.040682    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:57:36.040739    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.040916    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:36.041014    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:36.040998    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:57:36.044746    7621 cert_rotation.go:137] Starting client certificate rotation controller
	I0223 12:57:36.045004    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:57:36.045013    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.045024    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.045032    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.054436    7621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0223 12:57:36.054461    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.054471    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.054479    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.054487    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.054495    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:57:36.054503    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.054511    7621 round_trippers.go:580]     Audit-Id: dd961c59-3599-402a-98cb-62fc19792a60
	I0223 12:57:36.054525    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.054558    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"355","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.054961    7621 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"355","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.055000    7621 round_trippers.go:463] PUT https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:57:36.055005    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.055012    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.055019    7621 round_trippers.go:473]     Content-Type: application/json
	I0223 12:57:36.055027    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.060700    7621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 12:57:36.060725    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.060734    7621 round_trippers.go:580]     Audit-Id: e48c32a4-1372-443f-b55c-7f94a1ae5b6b
	I0223 12:57:36.060743    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.060770    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.060802    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.060827    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.060842    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:57:36.060851    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.060880    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"357","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.107363    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.107563    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:57:36.107813    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/storage.k8s.io/v1/storageclasses
	I0223 12:57:36.107819    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.107826    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.107835    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.132569    7621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 12:57:36.154288    7621 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 12:57:36.154302    7621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 12:57:36.154384    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:36.156665    7621 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0223 12:57:36.156685    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.156691    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.156698    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.156711    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.156719    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.156726    7621 round_trippers.go:580]     Content-Length: 109
	I0223 12:57:36.156732    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.156739    7621 round_trippers.go:580]     Audit-Id: 0f34f23d-8c22-47df-b655-26e2f7a8b4df
	I0223 12:57:36.156757    7621 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"366"},"items":[]}
	I0223 12:57:36.156967    7621 addons.go:227] Setting addon default-storageclass=true in "multinode-899000"
	I0223 12:57:36.156992    7621 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:57:36.157338    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:36.160767    7621 command_runner.go:130] > apiVersion: v1
	I0223 12:57:36.160791    7621 command_runner.go:130] > data:
	I0223 12:57:36.160795    7621 command_runner.go:130] >   Corefile: |
	I0223 12:57:36.160799    7621 command_runner.go:130] >     .:53 {
	I0223 12:57:36.160802    7621 command_runner.go:130] >         errors
	I0223 12:57:36.160806    7621 command_runner.go:130] >         health {
	I0223 12:57:36.160813    7621 command_runner.go:130] >            lameduck 5s
	I0223 12:57:36.160818    7621 command_runner.go:130] >         }
	I0223 12:57:36.160821    7621 command_runner.go:130] >         ready
	I0223 12:57:36.160828    7621 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 12:57:36.160833    7621 command_runner.go:130] >            pods insecure
	I0223 12:57:36.160839    7621 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 12:57:36.160845    7621 command_runner.go:130] >            ttl 30
	I0223 12:57:36.160850    7621 command_runner.go:130] >         }
	I0223 12:57:36.160853    7621 command_runner.go:130] >         prometheus :9153
	I0223 12:57:36.160865    7621 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 12:57:36.160874    7621 command_runner.go:130] >            max_concurrent 1000
	I0223 12:57:36.160878    7621 command_runner.go:130] >         }
	I0223 12:57:36.160883    7621 command_runner.go:130] >         cache 30
	I0223 12:57:36.160887    7621 command_runner.go:130] >         loop
	I0223 12:57:36.160890    7621 command_runner.go:130] >         reload
	I0223 12:57:36.160894    7621 command_runner.go:130] >         loadbalance
	I0223 12:57:36.160897    7621 command_runner.go:130] >     }
	I0223 12:57:36.160901    7621 command_runner.go:130] > kind: ConfigMap
	I0223 12:57:36.160904    7621 command_runner.go:130] > metadata:
	I0223 12:57:36.160910    7621 command_runner.go:130] >   creationTimestamp: "2023-02-23T20:57:22Z"
	I0223 12:57:36.160914    7621 command_runner.go:130] >   name: coredns
	I0223 12:57:36.160917    7621 command_runner.go:130] >   namespace: kube-system
	I0223 12:57:36.160921    7621 command_runner.go:130] >   resourceVersion: "235"
	I0223 12:57:36.160925    7621 command_runner.go:130] >   uid: 8f8b9516-d5e4-49cf-a150-eb5868d86ded
	I0223 12:57:36.161087    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0223 12:57:36.220070    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:36.220596    7621 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 12:57:36.220609    7621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 12:57:36.220672    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:36.284715    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:36.344918    7621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 12:57:36.453000    7621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 12:57:36.460379    7621 command_runner.go:130] > configmap/coredns replaced
	I0223 12:57:36.460408    7621 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0223 12:57:36.561205    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:57:36.561228    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.561235    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.561242    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.563946    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:36.563963    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.563970    7621 round_trippers.go:580]     Audit-Id: 3c26d50a-f7a7-499b-af55-cd9f8eb2d0ab
	I0223 12:57:36.563977    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.563984    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.563990    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.563994    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.564007    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:57:36.564026    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.564048    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"366","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.564127    7621 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899000" context rescaled to 1 replicas
	I0223 12:57:36.564155    7621 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 12:57:36.585743    7621 out.go:177] * Verifying Kubernetes components...
	I0223 12:57:36.627187    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:57:36.743198    7621 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 12:57:36.747083    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 12:57:36.753927    7621 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 12:57:36.759155    7621 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 12:57:36.772697    7621 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 12:57:36.836193    7621 command_runner.go:130] > pod/storage-provisioner created
	I0223 12:57:36.860851    7621 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 12:57:36.868076    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:36.907527    7621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 12:57:36.980297    7621 addons.go:492] enable addons completed in 939.672556ms: enabled=[storage-provisioner default-storageclass]
	I0223 12:57:36.992459    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.992713    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:57:36.992965    7621 node_ready.go:35] waiting up to 6m0s for node "multinode-899000" to be "Ready" ...
	I0223 12:57:36.993015    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:36.993021    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.993027    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.993032    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.996802    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:36.996823    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.996832    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.996840    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.996848    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.996855    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.996863    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.996869    7621 round_trippers.go:580]     Audit-Id: f0f9dd4c-b992-4485-9384-585881abd75e
	I0223 12:57:36.996967    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:36.997735    7621 node_ready.go:49] node "multinode-899000" has status "Ready":"True"
	I0223 12:57:36.997746    7621 node_ready.go:38] duration metric: took 4.76415ms waiting for node "multinode-899000" to be "Ready" ...
	I0223 12:57:36.997753    7621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:57:36.997805    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:36.997811    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.997817    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.997823    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.001898    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:37.001918    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.001927    7621 round_trippers.go:580]     Audit-Id: bdf8cacd-4af7-4bd3-ab48-8eddf650fd0b
	I0223 12:57:37.001934    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.001939    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.001944    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.001950    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.001962    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.003565    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"380"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60224 chars]
	I0223 12:57:37.006197    7621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:37.006244    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:37.006249    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.006256    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.006263    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.009384    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:37.009401    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.009408    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.009416    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.009423    7621 round_trippers.go:580]     Audit-Id: 3ebb44ee-ca50-4638-a97a-bb51bbce28d8
	I0223 12:57:37.009430    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.009439    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.009451    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.009555    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:37.009854    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:37.009864    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.009873    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.009883    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.012353    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:37.012374    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.012380    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.012385    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.012390    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.012395    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.012400    7621 round_trippers.go:580]     Audit-Id: f1d4595f-af15-4f73-a208-560768a68e81
	I0223 12:57:37.012405    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.012462    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:37.512791    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:37.512810    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.512818    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.512827    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.537589    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:37.537607    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.537616    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.537622    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.537628    7621 round_trippers.go:580]     Audit-Id: 71bed69f-4164-46a2-a2a1-4bfd3fd2a2a6
	I0223 12:57:37.537635    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.537646    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.537654    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.538674    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:37.538989    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:37.538997    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.539003    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.539010    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.542803    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:37.542830    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.542846    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.542861    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.542870    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.542876    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.542882    7621 round_trippers.go:580]     Audit-Id: 4386b824-d716-4d25-9790-c3de02e3fb0b
	I0223 12:57:37.542890    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.542966    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:38.012954    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:38.012979    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.013031    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.013044    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.016760    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:38.016772    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.016778    7621 round_trippers.go:580]     Audit-Id: b08cfd64-ecf8-464e-9731-287102cfe4f5
	I0223 12:57:38.016783    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.016788    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.016793    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.016804    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.016810    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.016870    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:38.017143    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:38.017149    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.017154    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.017161    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.019116    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:38.019125    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.019130    7621 round_trippers.go:580]     Audit-Id: abd5f201-42fc-4fab-a3b2-3cb256e7eb56
	I0223 12:57:38.019134    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.019139    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.019144    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.019149    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.019154    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.019223    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:38.514981    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:38.515002    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.515015    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.515026    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.519296    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:38.519310    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.519316    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.519321    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.519326    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.519331    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.519336    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.519341    7621 round_trippers.go:580]     Audit-Id: faa8eea9-2343-46b5-b6f9-643cfd45a748
	I0223 12:57:38.519406    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:38.519732    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:38.519739    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.519747    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.519752    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.522065    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:38.522073    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.522079    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.522083    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.522090    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.522095    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.522101    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.522106    7621 round_trippers.go:580]     Audit-Id: 4c6dce51-d02c-45bd-96bc-f1655baa6949
	I0223 12:57:38.522164    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:39.014867    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:39.014879    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.014888    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.014897    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.019376    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:39.019404    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.019419    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.019429    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.019435    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.019441    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.019450    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.019466    7621 round_trippers.go:580]     Audit-Id: 80e796a7-324f-402c-aa5e-bc94e0810dd9
	I0223 12:57:39.019567    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:39.020002    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:39.020012    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.020027    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.020041    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.023871    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:39.023907    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.023929    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.023937    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.023942    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.023948    7621 round_trippers.go:580]     Audit-Id: b6dec412-53d3-4433-973c-837c9d2426df
	I0223 12:57:39.023953    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.023958    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.024109    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:39.024309    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:39.514707    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:39.514723    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.514733    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.514739    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.535462    7621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0223 12:57:39.535474    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.535480    7621 round_trippers.go:580]     Audit-Id: 6a1be3c6-3b6b-43db-8129-ddd2009aa8be
	I0223 12:57:39.535485    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.535495    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.535501    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.535506    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.535512    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.535574    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:39.535844    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:39.535850    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.535856    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.535861    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.538413    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:39.538430    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.538437    7621 round_trippers.go:580]     Audit-Id: 345651d6-9c0d-4dde-9273-66959f725b05
	I0223 12:57:39.538450    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.538461    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.538466    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.538471    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.538476    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.538540    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:40.014828    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:40.014859    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.014923    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.014931    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.018508    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:40.018530    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.018539    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.018544    7621 round_trippers.go:580]     Audit-Id: 1dffcbc6-de4c-441d-be9d-bd9ad27ccd94
	I0223 12:57:40.018549    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.018553    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.018558    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.018562    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.018651    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:40.018961    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:40.018968    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.018976    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.018989    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.021421    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:40.021432    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.021438    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.021443    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.021451    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.021458    7621 round_trippers.go:580]     Audit-Id: 7e8457cf-7a16-4655-8693-9aa47bce26ad
	I0223 12:57:40.021464    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.021469    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.021535    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:40.513070    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:40.513091    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.513103    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.513114    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.517234    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:40.517249    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.517255    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.517263    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.517268    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.517273    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.517280    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.517286    7621 round_trippers.go:580]     Audit-Id: f96ed6fa-bcb4-4c8b-bc49-941245c06d9a
	I0223 12:57:40.517347    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:40.517628    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:40.517634    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.517640    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.517646    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.519538    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:40.519547    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.519553    7621 round_trippers.go:580]     Audit-Id: 6818c246-e938-400d-a1f3-6b547c0e2c14
	I0223 12:57:40.519558    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.519563    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.519568    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.519573    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.519578    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.519636    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:41.014271    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:41.014343    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.014358    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.014368    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.018448    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:41.018461    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.018472    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.018479    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.018486    7621 round_trippers.go:580]     Audit-Id: 88efea70-7042-4966-b01d-5c6b4eae4d29
	I0223 12:57:41.018493    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.018499    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.018505    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.018591    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:41.018848    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:41.018854    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.018859    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.018865    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.021219    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:41.021229    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.021235    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.021241    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.021246    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.021251    7621 round_trippers.go:580]     Audit-Id: 01866206-3a6c-4fa5-93ca-f2b19b3ae405
	I0223 12:57:41.021257    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.021262    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.021325    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:41.515042    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:41.515063    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.515075    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.515091    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.519528    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:41.519539    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.519545    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.519550    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.519556    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.519563    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.519568    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.519573    7621 round_trippers.go:580]     Audit-Id: 92a528a6-61be-4433-b9ce-aea2231336aa
	I0223 12:57:41.519778    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:41.520049    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:41.520057    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.520065    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.520073    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.522204    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:41.522214    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.522221    7621 round_trippers.go:580]     Audit-Id: a28d9c1d-7dd5-4e92-8e3b-5bc0f2c1495b
	I0223 12:57:41.522228    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.522236    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.522241    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.522246    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.522251    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.522529    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:41.522715    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:42.015184    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:42.015204    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.015217    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.015227    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.019399    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:42.019416    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.019422    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.019427    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.019431    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.019437    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.019447    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.019452    7621 round_trippers.go:580]     Audit-Id: c3c7d015-1b01-4fc7-9584-693d9efea2d0
	I0223 12:57:42.019513    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:42.019789    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:42.019795    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.019801    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.019807    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.022150    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:42.022160    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.022165    7621 round_trippers.go:580]     Audit-Id: 6cd5d782-1f60-4f2f-9ffc-b576d89df22f
	I0223 12:57:42.022170    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.022183    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.022189    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.022194    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.022199    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.022276    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:42.512925    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:42.512939    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.512958    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.512964    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.515730    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:42.515743    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.515749    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.515755    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.515763    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.515768    7621 round_trippers.go:580]     Audit-Id: 9889836c-50e1-473d-aba6-d9410a5c0316
	I0223 12:57:42.515773    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.515778    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.515846    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:42.516196    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:42.516203    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.516209    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.516215    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.518461    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:42.518473    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.518481    7621 round_trippers.go:580]     Audit-Id: 307c83df-61cf-4812-9aa9-4f95df344503
	I0223 12:57:42.518486    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.518492    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.518498    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.518502    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.518508    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.518570    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:43.013393    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:43.013414    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.013426    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.013436    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.037715    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:43.037739    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.037748    7621 round_trippers.go:580]     Audit-Id: 9601bb68-0ae3-473f-8600-a8b450d46691
	I0223 12:57:43.037755    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.037764    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.037774    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.037785    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.037795    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.037974    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:43.038444    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:43.038461    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.038473    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.038485    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.041542    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:43.041564    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.041581    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.041591    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.041597    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.041608    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.041622    7621 round_trippers.go:580]     Audit-Id: defe9c98-2a34-4afb-abaa-124d5238f440
	I0223 12:57:43.041635    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.041765    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:43.513808    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:43.513829    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.513841    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.513855    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.536676    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:43.536699    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.536713    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.536723    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.536728    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.536734    7621 round_trippers.go:580]     Audit-Id: 546c3cfe-00ca-4dae-8b95-ad8a9b3ad7ce
	I0223 12:57:43.536740    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.536746    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.536845    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:43.537151    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:43.537158    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.537164    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.537169    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.539378    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:43.539394    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.539403    7621 round_trippers.go:580]     Audit-Id: 9b39f22e-4c86-4f5c-b7da-d15f9ffa01b1
	I0223 12:57:43.539410    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.539416    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.539423    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.539429    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.539434    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.539502    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:43.539712    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:44.015105    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:44.015125    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.015137    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.015147    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.037529    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:44.037548    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.037556    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.037565    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.037575    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.037584    7621 round_trippers.go:580]     Audit-Id: bfb6cc20-39cb-4a64-82a9-4b6dface125d
	I0223 12:57:44.037595    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.037605    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.037693    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:44.038083    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:44.038092    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.038114    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.038120    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.040623    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:44.040639    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.040648    7621 round_trippers.go:580]     Audit-Id: 90f2c301-3066-4c6e-9217-49a4baceaa01
	I0223 12:57:44.040662    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.040675    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.040687    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.040707    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.040716    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.040901    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:44.513272    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:44.513291    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.513303    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.513313    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.536386    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:44.536426    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.536442    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.536453    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.536463    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.536477    7621 round_trippers.go:580]     Audit-Id: c6ad94bd-fe85-4fce-ae78-356285bbd1b8
	I0223 12:57:44.536500    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.536520    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.537149    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:44.537566    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:44.537577    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.537586    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.537592    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.540011    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:44.540024    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.540030    7621 round_trippers.go:580]     Audit-Id: 68dd7b06-5f1d-4848-8a96-e8defec195b1
	I0223 12:57:44.540042    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.540047    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.540053    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.540060    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.540065    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.540126    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:45.013158    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:45.013180    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.013195    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.013204    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.036554    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:45.036576    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.036587    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.036597    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.036607    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.036621    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.036632    7621 round_trippers.go:580]     Audit-Id: cc4181fe-a211-48a3-b00a-c24bb22e4237
	I0223 12:57:45.036642    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.036753    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:45.037242    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:45.037253    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.037265    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.037279    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.040736    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:45.040751    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.040760    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.040769    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.040780    7621 round_trippers.go:580]     Audit-Id: accab3ac-a9cf-427b-bcef-70e328e7bf0e
	I0223 12:57:45.040798    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.040806    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.040812    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.041491    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:45.513687    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:45.513707    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.513719    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.513729    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.537559    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:45.537577    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.537585    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.537594    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.537600    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.537609    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.537618    7621 round_trippers.go:580]     Audit-Id: dec67177-e9d4-41ba-a87d-657c61ff2373
	I0223 12:57:45.537625    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.537712    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:45.538102    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:45.538108    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.538114    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.538119    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.540504    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:45.540515    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.540520    7621 round_trippers.go:580]     Audit-Id: a22286fa-6b13-446b-9664-b027bc6ce8c4
	I0223 12:57:45.540525    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.540530    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.540535    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.540543    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.540548    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.540617    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:45.540835    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
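	[editor's note] The repeated GET pairs above (pod coredns-787d4945fb-255qk, then node multinode-899000, roughly every 500ms) are minikube's readiness poll: it keeps fetching the pod until its PodReady condition turns True. The sketch below is illustrative only, not minikube's pod_ready.go; it shows an equivalent check with client-go, assuming a reachable kubeconfig at the default location and the pod/namespace names taken from this log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// i.e. the condition this log is waiting on ("Ready":"False" above).
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config path.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll about every 500ms, matching the cadence of the log timestamps.
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
				"coredns-787d4945fb-255qk", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Println(`pod has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond)
		}
	}

	The same condition can be inspected by hand with: kubectl -n kube-system get pod coredns-787d4945fb-255qk -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'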
	I0223 12:57:46.014347    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:46.014368    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.014380    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.014390    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.036415    7621 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0223 12:57:46.036433    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.036440    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.036447    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.036454    7621 round_trippers.go:580]     Audit-Id: 2b513dcc-ebe9-4a3e-9e2a-97c60684bd50
	I0223 12:57:46.036462    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.036473    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.036479    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.036563    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:46.036973    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:46.036980    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.036986    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.036993    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.039212    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:46.039225    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.039233    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.039241    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.039247    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.039252    7621 round_trippers.go:580]     Audit-Id: 164ba9f0-7204-44a2-a8af-c21c95b04b54
	I0223 12:57:46.039258    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.039288    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.039383    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:46.513395    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:46.513416    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.513429    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.513439    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.538393    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:46.538412    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.538421    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.538428    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.538434    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.538441    7621 round_trippers.go:580]     Audit-Id: 323ad531-874d-430d-be12-ca0bebe1a142
	I0223 12:57:46.538447    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.538453    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.538545    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:46.538842    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:46.538852    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.538858    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.538863    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.541442    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:46.541455    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.541463    7621 round_trippers.go:580]     Audit-Id: e2220d08-7521-4b3a-b1eb-678b93056002
	I0223 12:57:46.541478    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.541485    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.541491    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.541499    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.541505    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.542226    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:47.014109    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:47.014130    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.014142    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.014152    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.037909    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:47.037926    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.037935    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.037942    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.037949    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.037955    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.037962    7621 round_trippers.go:580]     Audit-Id: 7da4ee61-126c-4e55-af6c-b92b3e58d1c3
	I0223 12:57:47.037969    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.038268    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:47.038588    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:47.038595    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.038603    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.038609    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.040988    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:47.041001    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.041010    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.041030    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.041041    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.041050    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.041058    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.041065    7621 round_trippers.go:580]     Audit-Id: 0f28c6f2-a920-40ba-85e5-229b65615408
	I0223 12:57:47.041152    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:47.513505    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:47.513527    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.513540    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.513550    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.537667    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:47.537698    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.537713    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.537724    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.537740    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.537757    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.537779    7621 round_trippers.go:580]     Audit-Id: ef0e5e01-0cae-4caa-8cca-28e0b2d4abd1
	I0223 12:57:47.537799    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.537970    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:47.538355    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:47.538363    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.538369    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.538374    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.540864    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:47.540882    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.540890    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.540897    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.540908    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.540913    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.540919    7621 round_trippers.go:580]     Audit-Id: 6cf7f39a-ccc9-4376-a891-1f64fabc16ac
	I0223 12:57:47.540926    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.541027    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:47.541237    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:48.015170    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:48.015192    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.015204    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.015214    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.037601    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:48.037619    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.037627    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.037634    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.037640    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.037647    7621 round_trippers.go:580]     Audit-Id: e024aab2-c046-4ded-a4a4-0c12e1c032c0
	I0223 12:57:48.037657    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.037666    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.037838    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:48.038126    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:48.038134    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.038140    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.038145    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.040555    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:48.040568    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.040575    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.040580    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.040585    7621 round_trippers.go:580]     Audit-Id: 276b01ca-4c9c-4586-bff8-db22e37efb00
	I0223 12:57:48.040590    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.040594    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.040600    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.040688    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:48.513541    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:48.513563    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.513575    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.513586    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.540436    7621 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0223 12:57:48.540451    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.540457    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.540462    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.540466    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.540470    7621 round_trippers.go:580]     Audit-Id: 9b05377d-0ece-4bb2-85bb-0693dc44b384
	I0223 12:57:48.540475    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.540486    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.540548    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:48.540840    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:48.540846    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.540852    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.540857    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.543399    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:48.543415    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.543430    7621 round_trippers.go:580]     Audit-Id: 39da13d8-d350-4009-a0dd-6d82f548e31d
	I0223 12:57:48.543440    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.543446    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.543451    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.543456    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.543463    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.543592    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:49.014115    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:49.014139    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.014152    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.014162    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.037964    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:49.037981    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.037989    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.037996    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.038002    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.038008    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.038015    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.038021    7621 round_trippers.go:580]     Audit-Id: 31a008c1-b2f4-4487-922f-265333c2e818
	I0223 12:57:49.038097    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:49.038438    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:49.038444    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.038450    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.038456    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.040482    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:49.040491    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.040497    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.040508    7621 round_trippers.go:580]     Audit-Id: 4901e121-c25c-420f-892b-51afebcef866
	I0223 12:57:49.040514    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.040518    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.040523    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.040528    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.040614    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:49.513919    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:49.513940    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.513952    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.513963    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.537507    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:49.537528    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.537548    7621 round_trippers.go:580]     Audit-Id: e452cc89-ed81-48fc-a827-5e8b6f3b6a3f
	I0223 12:57:49.537556    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.537564    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.537574    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.537584    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.537591    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.537700    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:49.538117    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:49.538125    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.538134    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.538142    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.540791    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:49.540803    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.540808    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.540815    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.540823    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.540829    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.540834    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.540838    7621 round_trippers.go:580]     Audit-Id: 0838230f-6504-4de1-817c-0a4f57400a68
	I0223 12:57:49.540931    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:50.015144    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:50.015170    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.015183    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.015192    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.035960    7621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0223 12:57:50.035983    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.035995    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.036006    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.036017    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.036037    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.036054    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.036069    7621 round_trippers.go:580]     Audit-Id: e94aef2b-8a67-46cd-8f8a-d7f092b594fa
	I0223 12:57:50.036289    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:50.036582    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:50.036588    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.036596    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.036603    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.038884    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:50.038895    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.038901    7621 round_trippers.go:580]     Audit-Id: 146a91ee-e1fe-48ed-ab36-cec6556ba195
	I0223 12:57:50.038906    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.038930    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.038936    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.038941    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.038945    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.039010    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:50.039216    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:50.515214    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:50.515234    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.515246    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.515256    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.537319    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:50.537343    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.537355    7621 round_trippers.go:580]     Audit-Id: f0efffdd-951d-4b86-8f99-639827385670
	I0223 12:57:50.537365    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.537375    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.537390    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.537410    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.537429    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.537552    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:50.537940    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:50.537948    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.537955    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.537962    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.540450    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:50.540465    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.540473    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.540480    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.540488    7621 round_trippers.go:580]     Audit-Id: ad262c15-82c2-4601-9b0b-71967af7b575
	I0223 12:57:50.540497    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.540503    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.540508    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.540568    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:51.013588    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:51.013600    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.013606    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.013611    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.016722    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:51.016738    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.016744    7621 round_trippers.go:580]     Audit-Id: cbc0669b-f725-4abc-8b8a-fdbd07e43e35
	I0223 12:57:51.016749    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.016785    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.016798    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.016803    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.016812    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.016874    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:51.017188    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:51.017196    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.017204    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.017209    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.019344    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:51.019355    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.019363    7621 round_trippers.go:580]     Audit-Id: 14a89371-1a2d-4b0b-8348-2d1e93755a74
	I0223 12:57:51.019368    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.019373    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.019378    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.019383    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.019389    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.019635    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:51.513267    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:51.513287    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.513300    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.513310    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.516778    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:51.516790    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.516795    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.516800    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.516804    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.516810    7621 round_trippers.go:580]     Audit-Id: ac6934bf-d595-4319-8928-0e7a0b139ab5
	I0223 12:57:51.516815    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.516820    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.517403    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:51.518013    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:51.518023    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.518034    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.518080    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.521081    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:51.521094    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.521100    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.521106    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.521110    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.521118    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.521123    7621 round_trippers.go:580]     Audit-Id: bfe87b3d-e10e-4716-bb9e-e5ef01179e23
	I0223 12:57:51.521128    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.521185    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.013907    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:52.013928    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.013942    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.013952    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.017838    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:52.017863    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.017872    7621 round_trippers.go:580]     Audit-Id: 1d326f0a-8fe5-4e3a-a107-df17c6c0bfb6
	I0223 12:57:52.017879    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.017886    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.017892    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.017900    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.017910    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.017997    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 12:57:52.018322    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.018328    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.018333    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.018339    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.020347    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.020356    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.020362    7621 round_trippers.go:580]     Audit-Id: 50a34c55-8ff9-4cc3-8eeb-afd78b48cb80
	I0223 12:57:52.020367    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.020372    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.020377    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.020382    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.020387    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.020453    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.020632    7621 pod_ready.go:92] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.020643    7621 pod_ready.go:81] duration metric: took 15.014158981s waiting for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.020651    7621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-fllr8" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.020682    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fllr8
	I0223 12:57:52.020687    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.020693    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.020701    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.022630    7621 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0223 12:57:52.022639    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.022645    7621 round_trippers.go:580]     Audit-Id: 31ac6d28-4f4e-4ca8-a999-9455654d0f8e
	I0223 12:57:52.022653    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.022659    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.022676    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.022684    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.022690    7621 round_trippers.go:580]     Content-Length: 216
	I0223 12:57:52.022696    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.022708    7621 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-fllr8\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-fllr8","kind":"pods"},"code":404}
	I0223 12:57:52.022816    7621 pod_ready.go:97] error getting pod "coredns-787d4945fb-fllr8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-fllr8" not found
	I0223 12:57:52.022823    7621 pod_ready.go:81] duration metric: took 2.166161ms waiting for pod "coredns-787d4945fb-fllr8" in "kube-system" namespace to be "Ready" ...
	E0223 12:57:52.022829    7621 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-fllr8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-fllr8" not found
	I0223 12:57:52.022837    7621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.022861    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/etcd-multinode-899000
	I0223 12:57:52.022868    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.022873    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.022879    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.024862    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.024870    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.024876    7621 round_trippers.go:580]     Audit-Id: 79afbc66-7802-45ee-8b5d-182ed3438ac9
	I0223 12:57:52.024881    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.024886    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.024891    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.024896    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.024901    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.024946    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899000","namespace":"kube-system","uid":"04c36b20-3f1c-4967-be88-dfaf04e459fb","resourceVersion":"273","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.mirror":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.seen":"2023-02-23T20:57:22.892805434Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 12:57:52.025159    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.025165    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.025171    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.025177    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.027383    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.027395    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.027400    7621 round_trippers.go:580]     Audit-Id: e5256895-7ce4-4a93-985a-983e6a92f71b
	I0223 12:57:52.027406    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.027411    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.027416    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.027420    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.027425    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.027505    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.027688    7621 pod_ready.go:92] pod "etcd-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.027695    7621 pod_ready.go:81] duration metric: took 4.853774ms waiting for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.027702    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.027730    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899000
	I0223 12:57:52.027734    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.027739    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.027746    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.029635    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.029644    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.029649    7621 round_trippers.go:580]     Audit-Id: ec57d51b-2886-46e6-866c-2d3df1e4fe35
	I0223 12:57:52.029658    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.029664    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.029670    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.029674    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.029680    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.029742    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899000","namespace":"kube-system","uid":"8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b","resourceVersion":"275","creationTimestamp":"2023-02-23T20:57:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.mirror":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.seen":"2023-02-23T20:57:13.277278836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 12:57:52.030018    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.030024    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.030030    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.030035    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.032046    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.032056    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.032065    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.032070    7621 round_trippers.go:580]     Audit-Id: c324aed6-1792-4b08-ad2b-d70633205de5
	I0223 12:57:52.032075    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.032080    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.032087    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.032092    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.032136    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.032302    7621 pod_ready.go:92] pod "kube-apiserver-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.032307    7621 pod_ready.go:81] duration metric: took 4.599631ms waiting for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.032313    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.032339    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899000
	I0223 12:57:52.032343    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.032350    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.032358    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.034377    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.034388    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.034396    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.034402    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.034407    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.034413    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.034419    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.034424    7621 round_trippers.go:580]     Audit-Id: 980ae714-349b-400d-b826-3c0178a86978
	I0223 12:57:52.034493    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899000","namespace":"kube-system","uid":"8a9821eb-106e-43fb-919d-59f0d6132887","resourceVersion":"301","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.mirror":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.seen":"2023-02-23T20:57:22.892794347Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 12:57:52.034741    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.034747    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.034753    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.034758    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.036930    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.036938    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.036944    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.036948    7621 round_trippers.go:580]     Audit-Id: 5d07398b-1852-40c0-a5b8-d2040ed95ffa
	I0223 12:57:52.036954    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.036958    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.036964    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.036969    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.037035    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.037206    7621 pod_ready.go:92] pod "kube-controller-manager-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.037212    7621 pod_ready.go:81] duration metric: took 4.8941ms waiting for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.037219    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.037248    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-w885m
	I0223 12:57:52.037252    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.037258    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.037264    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.039374    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.039383    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.039389    7621 round_trippers.go:580]     Audit-Id: 9147538d-969a-4301-ad43-999a043f8b58
	I0223 12:57:52.039394    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.039400    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.039408    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.039414    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.039419    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.039475    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w885m","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e1284e2-dcb3-408c-bc90-a501107f7e23","resourceVersion":"397","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 12:57:52.214086    7621 request.go:622] Waited for 174.278826ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.214143    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.214153    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.214171    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.214182    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.217595    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:52.217611    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.217616    7621 round_trippers.go:580]     Audit-Id: 1196ae99-b719-4d9b-b625-d61fdd5b8668
	I0223 12:57:52.217622    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.217627    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.217632    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.217637    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.217645    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.217712    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.217943    7621 pod_ready.go:92] pod "kube-proxy-w885m" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.217957    7621 pod_ready.go:81] duration metric: took 180.729704ms waiting for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.217963    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.413876    7621 request.go:622] Waited for 195.871547ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899000
	I0223 12:57:52.413947    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899000
	I0223 12:57:52.413952    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.413959    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.413965    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.416833    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.416843    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.416849    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.416854    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.416859    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.416864    7621 round_trippers.go:580]     Audit-Id: 272fc67b-d140-4985-b548-c85b1ce81f03
	I0223 12:57:52.416870    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.416874    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.416948    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899000","namespace":"kube-system","uid":"b864a38e-68d2-4949-92a9-0f736cbdf7fe","resourceVersion":"296","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bad6109cbec6cd514239122749558677","kubernetes.io/config.mirror":"bad6109cbec6cd514239122749558677","kubernetes.io/config.seen":"2023-02-23T20:57:22.892804438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 12:57:52.613918    7621 request.go:622] Waited for 196.719938ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.613981    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.613993    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.614005    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.614016    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.617690    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:52.617702    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.617708    7621 round_trippers.go:580]     Audit-Id: b3a6a5d0-4a80-46b3-a54f-53e427bd43b5
	I0223 12:57:52.617713    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.617718    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.617723    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.617728    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.617733    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.617796    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.617982    7621 pod_ready.go:92] pod "kube-scheduler-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.617988    7621 pod_ready.go:81] duration metric: took 400.011941ms waiting for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.617994    7621 pod_ready.go:38] duration metric: took 15.619951169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
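The pod_ready wait logged above polls each system-critical pod until its Ready condition reports True, and skips pods that disappear mid-wait (as coredns-787d4945fb-fllr8 did). A minimal client-go sketch of that readiness test follows; it assumes a kubeconfig at the default location and is illustrative only, not minikube's pod_ready.go implementation.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Load ~/.kube/config and build a clientset; the pod name is the one from the log.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-787d4945fb-255qk", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
    }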
	I0223 12:57:52.618009    7621 api_server.go:51] waiting for apiserver process to appear ...
	I0223 12:57:52.618069    7621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 12:57:52.627311    7621 command_runner.go:130] > 1885
	I0223 12:57:52.627987    7621 api_server.go:71] duration metric: took 16.063512433s to wait for apiserver process to appear ...
	I0223 12:57:52.628000    7621 api_server.go:87] waiting for apiserver healthz status ...
	I0223 12:57:52.628011    7621 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51104/healthz ...
	I0223 12:57:52.634019    7621 api_server.go:278] https://127.0.0.1:51104/healthz returned 200:
	ok
	I0223 12:57:52.634053    7621 round_trippers.go:463] GET https://127.0.0.1:51104/version
	I0223 12:57:52.634057    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.634064    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.634070    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.635235    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.635247    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.635253    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.635259    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.635264    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.635270    7621 round_trippers.go:580]     Content-Length: 263
	I0223 12:57:52.635274    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.635280    7621 round_trippers.go:580]     Audit-Id: 7ccf1e6f-08a5-4c76-9dab-92bdd8b4242d
	I0223 12:57:52.635285    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.635294    7621 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 12:57:52.635337    7621 api_server.go:140] control plane version: v1.26.1
	I0223 12:57:52.635344    7621 api_server.go:130] duration metric: took 7.339719ms to wait for apiserver health ...
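The healthz probe and the GET /version request above can be reproduced with client-go's discovery client. A hedged sketch, reusing the cs clientset (and the context import) from the previous sketch:

    // controlPlaneVersion performs the same GET /version request as the log above
    // and returns GitVersion ("v1.26.1" in this run).
    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }

    // healthz mirrors the GET /healthz probe; a healthy apiserver answers with the
    // plain string "ok", as seen above.
    func healthz(cs *kubernetes.Clientset) (string, error) {
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
        return string(body), err
    }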
	I0223 12:57:52.635348    7621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 12:57:52.815960    7621 request.go:622] Waited for 180.563441ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:52.816069    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:52.816081    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.816094    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.816106    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.821756    7621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 12:57:52.821773    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.821783    7621 round_trippers.go:580]     Audit-Id: e8f493f6-e476-42c9-a1c5-2a0d9b2068d1
	I0223 12:57:52.821790    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.821800    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.821805    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.821810    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.821815    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.822596    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 12:57:52.823871    7621 system_pods.go:59] 8 kube-system pods found
	I0223 12:57:52.823884    7621 system_pods.go:61] "coredns-787d4945fb-255qk" [b14a01e5-36d7-4404-9478-12ce93233303] Running
	I0223 12:57:52.823890    7621 system_pods.go:61] "etcd-multinode-899000" [04c36b20-3f1c-4967-be88-dfaf04e459fb] Running
	I0223 12:57:52.823894    7621 system_pods.go:61] "kindnet-gvns6" [4583b1ff-e149-4409-a263-2b75532c1b48] Running
	I0223 12:57:52.823898    7621 system_pods.go:61] "kube-apiserver-multinode-899000" [8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b] Running
	I0223 12:57:52.823902    7621 system_pods.go:61] "kube-controller-manager-multinode-899000" [8a9821eb-106e-43fb-919d-59f0d6132887] Running
	I0223 12:57:52.823906    7621 system_pods.go:61] "kube-proxy-w885m" [9e1284e2-dcb3-408c-bc90-a501107f7e23] Running
	I0223 12:57:52.823910    7621 system_pods.go:61] "kube-scheduler-multinode-899000" [b864a38e-68d2-4949-92a9-0f736cbdf7fe] Running
	I0223 12:57:52.823914    7621 system_pods.go:61] "storage-provisioner" [1cdb29ef-26cb-4ab3-a7f9-c455dfda76d9] Running
	I0223 12:57:52.823918    7621 system_pods.go:74] duration metric: took 188.562695ms to wait for pod list to return data ...
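The system_pods check above lists the kube-system pods and confirms each is Running (the same listing is repeated below for the k8s-apps check). A sketch of that listing with client-go, again reusing the cs clientset from the first sketch:

    // notRunningSystemPods returns the names of kube-system pods whose phase is not Running.
    func notRunningSystemPods(cs *kubernetes.Clientset) ([]string, error) {
        pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return nil, err
        }
        var notRunning []string
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                notRunning = append(notRunning, p.Name)
            }
        }
        return notRunning, nil
    }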
	I0223 12:57:52.823925    7621 default_sa.go:34] waiting for default service account to be created ...
	I0223 12:57:53.015033    7621 request.go:622] Waited for 191.045495ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/default/serviceaccounts
	I0223 12:57:53.015118    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/default/serviceaccounts
	I0223 12:57:53.015126    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:53.015138    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:53.015149    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:53.019608    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:53.019621    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:53.019626    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:53.019631    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:53.019640    7621 round_trippers.go:580]     Content-Length: 261
	I0223 12:57:53.019644    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:53 GMT
	I0223 12:57:53.019650    7621 round_trippers.go:580]     Audit-Id: 112cfe23-2662-4a95-8c4b-64ece10582f0
	I0223 12:57:53.019657    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:53.019663    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:53.019676    7621 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0e104d57-3e04-4d24-8671-d465a61acfa7","resourceVersion":"312","creationTimestamp":"2023-02-23T20:57:35Z"}}]}
	I0223 12:57:53.019781    7621 default_sa.go:45] found service account: "default"
	I0223 12:57:53.019788    7621 default_sa.go:55] duration metric: took 195.854678ms for default service account to be created ...
	I0223 12:57:53.019793    7621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 12:57:53.214058    7621 request.go:622] Waited for 194.226374ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:53.214093    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:53.214099    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:53.214111    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:53.214154    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:53.217977    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:53.217988    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:53.217994    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:53.217999    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:53.218008    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:53 GMT
	I0223 12:57:53.218014    7621 round_trippers.go:580]     Audit-Id: 1970a5d2-3fca-4726-9bc2-c0a7594f4d4e
	I0223 12:57:53.218019    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:53.218024    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:53.218698    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 12:57:53.219999    7621 system_pods.go:86] 8 kube-system pods found
	I0223 12:57:53.220008    7621 system_pods.go:89] "coredns-787d4945fb-255qk" [b14a01e5-36d7-4404-9478-12ce93233303] Running
	I0223 12:57:53.220012    7621 system_pods.go:89] "etcd-multinode-899000" [04c36b20-3f1c-4967-be88-dfaf04e459fb] Running
	I0223 12:57:53.220016    7621 system_pods.go:89] "kindnet-gvns6" [4583b1ff-e149-4409-a263-2b75532c1b48] Running
	I0223 12:57:53.220020    7621 system_pods.go:89] "kube-apiserver-multinode-899000" [8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b] Running
	I0223 12:57:53.220025    7621 system_pods.go:89] "kube-controller-manager-multinode-899000" [8a9821eb-106e-43fb-919d-59f0d6132887] Running
	I0223 12:57:53.220029    7621 system_pods.go:89] "kube-proxy-w885m" [9e1284e2-dcb3-408c-bc90-a501107f7e23] Running
	I0223 12:57:53.220032    7621 system_pods.go:89] "kube-scheduler-multinode-899000" [b864a38e-68d2-4949-92a9-0f736cbdf7fe] Running
	I0223 12:57:53.220038    7621 system_pods.go:89] "storage-provisioner" [1cdb29ef-26cb-4ab3-a7f9-c455dfda76d9] Running
	I0223 12:57:53.220044    7621 system_pods.go:126] duration metric: took 200.242956ms to wait for k8s-apps to be running ...
	I0223 12:57:53.220051    7621 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 12:57:53.220107    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:57:53.229785    7621 system_svc.go:56] duration metric: took 9.728758ms WaitForService to wait for kubelet.
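The kubelet check above runs "systemctl is-active --quiet" over SSH inside the node container. A local, self-contained sketch of the same probe (not minikube's ssh_runner): with --quiet, systemctl reports the unit state only through its exit code, so a nil error from Run means the unit is active.

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Exit status 0 (nil error) means the kubelet unit is active.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }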
	I0223 12:57:53.229797    7621 kubeadm.go:578] duration metric: took 16.665310693s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 12:57:53.229810    7621 node_conditions.go:102] verifying NodePressure condition ...
	I0223 12:57:53.414181    7621 request.go:622] Waited for 184.278772ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes
	I0223 12:57:53.414239    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes
	I0223 12:57:53.414247    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:53.414260    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:53.414271    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:53.418096    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:53.418113    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:53.418121    7621 round_trippers.go:580]     Audit-Id: 0855f447-43ce-46da-ad93-3fbe83589606
	I0223 12:57:53.418128    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:53.418134    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:53.418141    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:53.418149    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:53.418155    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:53 GMT
	I0223 12:57:53.418255    7621 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5007 chars]
	I0223 12:57:53.418506    7621 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0223 12:57:53.418519    7621 node_conditions.go:123] node cpu capacity is 6
	I0223 12:57:53.418528    7621 node_conditions.go:105] duration metric: took 188.711451ms to run NodePressure ...
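The NodePressure step reads the node's capacity fields (6 CPUs and 115273188Ki of ephemeral storage in this run). A sketch of the same readout, with cs built as in the first sketch; resource quantities need an addressable local before calling Value or String:

    // nodeCapacity returns the node's CPU count and ephemeral-storage capacity.
    func nodeCapacity(cs *kubernetes.Clientset, name string) (int64, string, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return 0, "", err
        }
        cpu := node.Status.Capacity[corev1.ResourceCPU]
        storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
        return cpu.Value(), storage.String(), nil
    }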
	I0223 12:57:53.418535    7621 start.go:228] waiting for startup goroutines ...
	I0223 12:57:53.418541    7621 start.go:233] waiting for cluster config update ...
	I0223 12:57:53.418551    7621 start.go:242] writing updated cluster config ...
	I0223 12:57:53.440182    7621 out.go:177] 
	I0223 12:57:53.462685    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:57:53.462788    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:57:53.485351    7621 out.go:177] * Starting worker node multinode-899000-m02 in cluster multinode-899000
	I0223 12:57:53.506982    7621 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:57:53.528371    7621 out.go:177] * Pulling base image ...
	I0223 12:57:53.571022    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:57:53.571055    7621 cache.go:57] Caching tarball of preloaded images
	I0223 12:57:53.571092    7621 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:57:53.571236    7621 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 12:57:53.571256    7621 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 12:57:53.571369    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:57:53.630104    7621 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 12:57:53.630125    7621 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 12:57:53.630147    7621 cache.go:193] Successfully downloaded all kic artifacts
	I0223 12:57:53.630187    7621 start.go:364] acquiring machines lock for multinode-899000-m02: {Name:mk5c03a1afa4b7b0e0a809f52d581925fe861d81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 12:57:53.630481    7621 start.go:368] acquired machines lock for "multinode-899000-m02" in 282.935µs
	I0223 12:57:53.630511    7621 start.go:93] Provisioning new machine with config: &{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 12:57:53.630574    7621 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 12:57:53.652550    7621 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 12:57:53.652798    7621 start.go:159] libmachine.API.Create for "multinode-899000" (driver="docker")
	I0223 12:57:53.652835    7621 client.go:168] LocalClient.Create starting
	I0223 12:57:53.653017    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 12:57:53.653098    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:57:53.653125    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:57:53.653228    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 12:57:53.653280    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:57:53.653300    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:57:53.674643    7621 cli_runner.go:164] Run: docker network inspect multinode-899000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 12:57:53.731836    7621 network_create.go:76] Found existing network {name:multinode-899000 subnet:0xc0005581e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 12:57:53.731878    7621 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-899000-m02" container
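The kic step above derives the static IP 192.168.58.3 for multinode-899000-m02 from the existing multinode-899000 network (gateway 192.168.58.1, with .2 already taken by the control plane). A hedged sketch of that derivation, assuming the per-node index is simply added to the gateway's last octet; it uses the standard net and fmt packages and is not minikube's kic code:

    // nthHostIP offsets the last octet of an IPv4 gateway address, e.g.
    // nthHostIP("192.168.58.1", 2) -> "192.168.58.3".
    func nthHostIP(gateway string, offset int) (string, error) {
        ip := net.ParseIP(gateway).To4()
        if ip == nil {
            return "", fmt.Errorf("not an IPv4 address: %q", gateway)
        }
        out := make(net.IP, len(ip))
        copy(out, ip)
        out[3] += byte(offset) // sketch only: no overflow or broadcast-address handling
        return out.String(), nil
    }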
	I0223 12:57:53.732001    7621 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 12:57:53.787889    7621 cli_runner.go:164] Run: docker volume create multinode-899000-m02 --label name.minikube.sigs.k8s.io=multinode-899000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 12:57:53.843787    7621 oci.go:103] Successfully created a docker volume multinode-899000-m02
	I0223 12:57:53.843920    7621 cli_runner.go:164] Run: docker run --rm --name multinode-899000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000-m02 --entrypoint /usr/bin/test -v multinode-899000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 12:57:54.283218    7621 oci.go:107] Successfully prepared a docker volume multinode-899000-m02
	I0223 12:57:54.283253    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:57:54.283266    7621 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 12:57:54.283379    7621 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 12:58:00.609103    7621 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.325541285s)
	I0223 12:58:00.609123    7621 kic.go:199] duration metric: took 6.325740 seconds to extract preloaded images to volume
	I0223 12:58:00.609225    7621 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 12:58:00.749644    7621 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-899000-m02 --name multinode-899000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-899000-m02 --network multinode-899000 --ip 192.168.58.3 --volume multinode-899000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 12:58:01.114569    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Running}}
	I0223 12:58:01.179142    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:58:01.242270    7621 cli_runner.go:164] Run: docker exec multinode-899000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 12:58:01.358626    7621 oci.go:144] the created container "multinode-899000-m02" has a running status.
	I0223 12:58:01.358651    7621 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa...
	I0223 12:58:01.597296    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 12:58:01.597354    7621 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 12:58:01.698107    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:58:01.755893    7621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 12:58:01.755914    7621 kic_runner.go:114] Args: [docker exec --privileged multinode-899000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 12:58:01.855680    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:58:01.912416    7621 machine.go:88] provisioning docker machine ...
	I0223 12:58:01.912458    7621 ubuntu.go:169] provisioning hostname "multinode-899000-m02"
	I0223 12:58:01.912554    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:01.970487    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:01.970880    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:01.970890    7621 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899000-m02 && echo "multinode-899000-m02" | sudo tee /etc/hostname
	I0223 12:58:02.113684    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899000-m02
	
	I0223 12:58:02.113789    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.170964    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:02.171323    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:02.171336    7621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 12:58:02.304072    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 12:58:02.304095    7621 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-825/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-825/.minikube}
	I0223 12:58:02.304104    7621 ubuntu.go:177] setting up certificates
	I0223 12:58:02.304110    7621 provision.go:83] configureAuth start
	I0223 12:58:02.304186    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:58:02.362032    7621 provision.go:138] copyHostCerts
	I0223 12:58:02.362087    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:58:02.362146    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem, removing ...
	I0223 12:58:02.362152    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:58:02.362253    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem (1078 bytes)
	I0223 12:58:02.362425    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:58:02.362461    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem, removing ...
	I0223 12:58:02.362466    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:58:02.362524    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem (1123 bytes)
	I0223 12:58:02.362643    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:58:02.362672    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem, removing ...
	I0223 12:58:02.362677    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:58:02.362748    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem (1675 bytes)
	I0223 12:58:02.362869    7621 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem org=jenkins.multinode-899000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-899000-m02]
	I0223 12:58:02.430743    7621 provision.go:172] copyRemoteCerts
	I0223 12:58:02.430801    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 12:58:02.430865    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.488615    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:02.583507    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 12:58:02.583585    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 12:58:02.601383    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 12:58:02.601471    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 12:58:02.618489    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 12:58:02.618586    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 12:58:02.635752    7621 provision.go:86] duration metric: configureAuth took 331.62292ms
	I0223 12:58:02.635768    7621 ubuntu.go:193] setting minikube options for container-runtime
	I0223 12:58:02.635922    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:58:02.635997    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.693216    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:02.693572    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:02.693582    7621 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 12:58:02.827446    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 12:58:02.827458    7621 ubuntu.go:71] root file system type: overlay
	I0223 12:58:02.827557    7621 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 12:58:02.827635    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.885944    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:02.886302    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:02.886359    7621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 12:58:03.029517    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 12:58:03.029624    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:03.088025    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:03.088389    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:03.088403    7621 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 12:58:03.731596    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 20:58:03.027503949 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 12:58:03.731621    7621 machine.go:91] provisioned docker machine in 1.819152297s
	I0223 12:58:03.731628    7621 client.go:171] LocalClient.Create took 10.078604193s
	I0223 12:58:03.731643    7621 start.go:167] duration metric: libmachine.API.Create for "multinode-899000" took 10.078665842s
	I0223 12:58:03.731649    7621 start.go:300] post-start starting for "multinode-899000-m02" (driver="docker")
	I0223 12:58:03.731653    7621 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 12:58:03.731739    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 12:58:03.731794    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:03.789461    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:03.884866    7621 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 12:58:03.888318    7621 command_runner.go:130] > NAME="Ubuntu"
	I0223 12:58:03.888327    7621 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 12:58:03.888331    7621 command_runner.go:130] > ID=ubuntu
	I0223 12:58:03.888352    7621 command_runner.go:130] > ID_LIKE=debian
	I0223 12:58:03.888357    7621 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 12:58:03.888360    7621 command_runner.go:130] > VERSION_ID="20.04"
	I0223 12:58:03.888365    7621 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 12:58:03.888369    7621 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 12:58:03.888374    7621 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 12:58:03.888388    7621 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 12:58:03.888394    7621 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 12:58:03.888399    7621 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 12:58:03.888452    7621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 12:58:03.888463    7621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 12:58:03.888486    7621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 12:58:03.888493    7621 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 12:58:03.888499    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/addons for local assets ...
	I0223 12:58:03.888588    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/files for local assets ...
	I0223 12:58:03.888742    7621 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> 20572.pem in /etc/ssl/certs
	I0223 12:58:03.888750    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /etc/ssl/certs/20572.pem
	I0223 12:58:03.888923    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 12:58:03.896128    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:58:03.912955    7621 start.go:303] post-start completed in 181.294649ms
	I0223 12:58:03.913482    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:58:03.969733    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:58:03.970137    7621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:58:03.970191    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:04.028655    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:04.119853    7621 command_runner.go:130] > 6%!
	(MISSING)I0223 12:58:04.119927    7621 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 12:58:04.124350    7621 command_runner.go:130] > 99G
	I0223 12:58:04.124620    7621 start.go:128] duration metric: createHost completed in 10.49384776s
	I0223 12:58:04.124634    7621 start.go:83] releasing machines lock for "multinode-899000-m02", held for 10.49395392s
	I0223 12:58:04.124724    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:58:04.204672    7621 out.go:177] * Found network options:
	I0223 12:58:04.226041    7621 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 12:58:04.247484    7621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 12:58:04.247521    7621 proxy.go:119] fail to check proxy env: Error ip not in block
	I0223 12:58:04.247640    7621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 12:58:04.247647    7621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 12:58:04.247712    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:04.247723    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:04.308866    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:04.308860    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:04.455176    7621 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 12:58:04.455202    7621 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 12:58:04.455208    7621 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 12:58:04.455213    7621 command_runner.go:130] > Device: 10001ch/1048604d	Inode: 2229761     Links: 1
	I0223 12:58:04.455219    7621 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:58:04.455225    7621 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:58:04.455231    7621 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:58:04.455236    7621 command_runner.go:130] > Change: 2023-02-23 20:33:52.692471760 +0000
	I0223 12:58:04.455239    7621 command_runner.go:130] >  Birth: -
	I0223 12:58:04.455315    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 12:58:04.476107    7621 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 12:58:04.476177    7621 ssh_runner.go:195] Run: which cri-dockerd
	I0223 12:58:04.480145    7621 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 12:58:04.480231    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 12:58:04.487548    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 12:58:04.500409    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 12:58:04.514526    7621 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 12:58:04.514563    7621 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 12:58:04.514571    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:58:04.514581    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:58:04.514653    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:58:04.527017    7621 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:58:04.527031    7621 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:58:04.527853    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 12:58:04.536504    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 12:58:04.544895    7621 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 12:58:04.544956    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 12:58:04.553381    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:58:04.561798    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 12:58:04.570360    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:58:04.578952    7621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 12:58:04.586968    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 12:58:04.595681    7621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 12:58:04.602353    7621 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 12:58:04.603050    7621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 12:58:04.610377    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:58:04.686472    7621 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 12:58:04.760664    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:58:04.760684    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:58:04.760748    7621 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 12:58:04.770459    7621 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 12:58:04.770528    7621 command_runner.go:130] > [Unit]
	I0223 12:58:04.770543    7621 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 12:58:04.770565    7621 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 12:58:04.770578    7621 command_runner.go:130] > BindsTo=containerd.service
	I0223 12:58:04.770586    7621 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 12:58:04.770594    7621 command_runner.go:130] > Wants=network-online.target
	I0223 12:58:04.770604    7621 command_runner.go:130] > Requires=docker.socket
	I0223 12:58:04.770611    7621 command_runner.go:130] > StartLimitBurst=3
	I0223 12:58:04.770616    7621 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 12:58:04.770621    7621 command_runner.go:130] > [Service]
	I0223 12:58:04.770626    7621 command_runner.go:130] > Type=notify
	I0223 12:58:04.770629    7621 command_runner.go:130] > Restart=on-failure
	I0223 12:58:04.770633    7621 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 12:58:04.770641    7621 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 12:58:04.770652    7621 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 12:58:04.770657    7621 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 12:58:04.770663    7621 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 12:58:04.770669    7621 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 12:58:04.770676    7621 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 12:58:04.770683    7621 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 12:58:04.770693    7621 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 12:58:04.770701    7621 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 12:58:04.770704    7621 command_runner.go:130] > ExecStart=
	I0223 12:58:04.770718    7621 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 12:58:04.770723    7621 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 12:58:04.770728    7621 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 12:58:04.770735    7621 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 12:58:04.770744    7621 command_runner.go:130] > LimitNOFILE=infinity
	I0223 12:58:04.770749    7621 command_runner.go:130] > LimitNPROC=infinity
	I0223 12:58:04.770754    7621 command_runner.go:130] > LimitCORE=infinity
	I0223 12:58:04.770758    7621 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 12:58:04.770762    7621 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 12:58:04.770766    7621 command_runner.go:130] > TasksMax=infinity
	I0223 12:58:04.770769    7621 command_runner.go:130] > TimeoutStartSec=0
	I0223 12:58:04.770774    7621 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 12:58:04.770778    7621 command_runner.go:130] > Delegate=yes
	I0223 12:58:04.770788    7621 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 12:58:04.770792    7621 command_runner.go:130] > KillMode=process
	I0223 12:58:04.770795    7621 command_runner.go:130] > [Install]
	I0223 12:58:04.770799    7621 command_runner.go:130] > WantedBy=multi-user.target
	I0223 12:58:04.771410    7621 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 12:58:04.771475    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 12:58:04.781576    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:58:04.794326    7621 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:58:04.794339    7621 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:58:04.795123    7621 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 12:58:04.875293    7621 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 12:58:04.970597    7621 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 12:58:04.970615    7621 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 12:58:04.983789    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:58:05.072300    7621 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0223 12:58:05.292127    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:58:05.367710    7621 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 12:58:05.367778    7621 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 12:58:05.433730    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:58:05.505014    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:58:05.580488    7621 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 12:58:05.609862    7621 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 12:58:05.609951    7621 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 12:58:05.614176    7621 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 12:58:05.614187    7621 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 12:58:05.614195    7621 command_runner.go:130] > Device: 100024h/1048612d	Inode: 206         Links: 1
	I0223 12:58:05.614202    7621 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 12:58:05.614211    7621 command_runner.go:130] > Access: 2023-02-23 20:58:05.588503925 +0000
	I0223 12:58:05.614217    7621 command_runner.go:130] > Modify: 2023-02-23 20:58:05.588503925 +0000
	I0223 12:58:05.614221    7621 command_runner.go:130] > Change: 2023-02-23 20:58:05.606503924 +0000
	I0223 12:58:05.614226    7621 command_runner.go:130] >  Birth: -
	I0223 12:58:05.614246    7621 start.go:553] Will wait 60s for crictl version
	I0223 12:58:05.614285    7621 ssh_runner.go:195] Run: which crictl
	I0223 12:58:05.618012    7621 command_runner.go:130] > /usr/bin/crictl
	I0223 12:58:05.618085    7621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 12:58:05.713742    7621 command_runner.go:130] > Version:  0.1.0
	I0223 12:58:05.713755    7621 command_runner.go:130] > RuntimeName:  docker
	I0223 12:58:05.713759    7621 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 12:58:05.713766    7621 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 12:58:05.715686    7621 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 12:58:05.715762    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:58:05.738599    7621 command_runner.go:130] > 23.0.1
	I0223 12:58:05.740332    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:58:05.763944    7621 command_runner.go:130] > 23.0.1
	I0223 12:58:05.809278    7621 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 12:58:05.831233    7621 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 12:58:05.853423    7621 cli_runner.go:164] Run: docker exec -t multinode-899000-m02 dig +short host.docker.internal
	I0223 12:58:05.968826    7621 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 12:58:05.968940    7621 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 12:58:05.973308    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:58:05.983443    7621 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000 for IP: 192.168.58.3
	I0223 12:58:05.983459    7621 certs.go:186] acquiring lock for shared ca certs: {Name:mk9b7a98958f4333f06cfa6d87963d4d7f2b94cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:58:05.983636    7621 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key
	I0223 12:58:05.983693    7621 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key
	I0223 12:58:05.983703    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 12:58:05.983725    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 12:58:05.983744    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 12:58:05.983763    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 12:58:05.983846    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem (1338 bytes)
	W0223 12:58:05.983887    7621 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057_empty.pem, impossibly tiny 0 bytes
	I0223 12:58:05.983897    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 12:58:05.983941    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem (1078 bytes)
	I0223 12:58:05.983981    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem (1123 bytes)
	I0223 12:58:05.984022    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem (1675 bytes)
	I0223 12:58:05.984103    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:58:05.984136    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /usr/share/ca-certificates/20572.pem
	I0223 12:58:05.984157    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:05.984175    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem -> /usr/share/ca-certificates/2057.pem
	I0223 12:58:05.984499    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 12:58:06.001954    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 12:58:06.018933    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 12:58:06.036127    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 12:58:06.053361    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /usr/share/ca-certificates/20572.pem (1708 bytes)
	I0223 12:58:06.070584    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 12:58:06.087761    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem --> /usr/share/ca-certificates/2057.pem (1338 bytes)
	I0223 12:58:06.105036    7621 ssh_runner.go:195] Run: openssl version
	I0223 12:58:06.110267    7621 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 12:58:06.110537    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 12:58:06.118593    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.122428    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.122444    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.122493    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.127668    7621 command_runner.go:130] > b5213941
	I0223 12:58:06.128051    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 12:58:06.135997    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2057.pem && ln -fs /usr/share/ca-certificates/2057.pem /etc/ssl/certs/2057.pem"
	I0223 12:58:06.143987    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.147936    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.148008    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.148068    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.153153    7621 command_runner.go:130] > 51391683
	I0223 12:58:06.153487    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2057.pem /etc/ssl/certs/51391683.0"
	I0223 12:58:06.161545    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20572.pem && ln -fs /usr/share/ca-certificates/20572.pem /etc/ssl/certs/20572.pem"
	I0223 12:58:06.169703    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.173494    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.173519    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.173562    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.178838    7621 command_runner.go:130] > 3ec20f2e
	I0223 12:58:06.179062    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20572.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 12:58:06.187248    7621 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 12:58:06.209788    7621 command_runner.go:130] > cgroupfs
	I0223 12:58:06.211461    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:58:06.211475    7621 cni.go:136] 2 nodes found, recommending kindnet
	I0223 12:58:06.211483    7621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 12:58:06.211498    7621 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899000 NodeName:multinode-899000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 12:58:06.211590    7621 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-899000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 12:58:06.211640    7621 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-899000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 12:58:06.211706    7621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 12:58:06.218964    7621 command_runner.go:130] > kubeadm
	I0223 12:58:06.218973    7621 command_runner.go:130] > kubectl
	I0223 12:58:06.218977    7621 command_runner.go:130] > kubelet
	I0223 12:58:06.219673    7621 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 12:58:06.219735    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 12:58:06.227058    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 12:58:06.239739    7621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 12:58:06.252910    7621 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 12:58:06.256788    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:58:06.266829    7621 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:58:06.267002    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:58:06.267014    7621 start.go:301] JoinCluster: &{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:58:06.267068    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 12:58:06.267120    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:58:06.326048    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:58:06.487311    7621 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 898us1.ihys6g4jwq17jiqx --discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 
	I0223 12:58:06.487352    7621 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 12:58:06.487370    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 898us1.ihys6g4jwq17jiqx --discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899000-m02"
	I0223 12:58:06.528112    7621 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 12:58:06.638474    7621 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 12:58:06.638493    7621 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 12:58:06.662311    7621 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:58:06.662325    7621 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:58:06.662330    7621 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 12:58:06.733207    7621 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 12:58:20.247375    7621 command_runner.go:130] > This node has joined the cluster:
	I0223 12:58:20.247395    7621 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 12:58:20.247403    7621 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 12:58:20.247412    7621 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 12:58:20.250700    7621 command_runner.go:130] ! W0223 20:58:06.527320    1238 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 12:58:20.250718    7621 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 12:58:20.250730    7621 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:58:20.250745    7621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 898us1.ihys6g4jwq17jiqx --discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899000-m02": (13.763113582s)
	I0223 12:58:20.250764    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 12:58:20.395854    7621 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 12:58:20.395873    7621 start.go:303] JoinCluster complete in 14.128603796s
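	(Condensed, the worker-join sequence recorded above is: mint a join command on the control plane, run it on the new node against the Docker CRI socket, then enable the kubelet unit. A sketch of the same steps run by hand, using only the flags shown in this log; the token and CA hash are placeholders for the values printed by the first command:

	    # on the control plane: print a reusable join command
	    sudo kubeadm token create --print-join-command --ttl=0
	    # on the worker: join with the printed token and discovery hash
	    sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
	      --ignore-preflight-errors=all \
	      --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899000-m02
	    # persist the kubelet across reboots (addresses the [WARNING Service-Kubelet] above)
	    sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet
	)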
	I0223 12:58:20.395881    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:58:20.395886    7621 cni.go:136] 2 nodes found, recommending kindnet
	I0223 12:58:20.395975    7621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 12:58:20.399941    7621 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 12:58:20.399951    7621 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 12:58:20.399960    7621 command_runner.go:130] > Device: a6h/166d	Inode: 2102733     Links: 1
	I0223 12:58:20.399965    7621 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:58:20.399973    7621 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:58:20.399978    7621 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:58:20.399982    7621 command_runner.go:130] > Change: 2023-02-23 20:33:51.991471766 +0000
	I0223 12:58:20.399991    7621 command_runner.go:130] >  Birth: -
	I0223 12:58:20.400070    7621 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 12:58:20.400080    7621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 12:58:20.413157    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 12:58:20.601827    7621 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 12:58:20.604293    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 12:58:20.606049    7621 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 12:58:20.614565    7621 command_runner.go:130] > daemonset.apps/kindnet configured
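	(With two nodes present the run switches the CNI to kindnet: the manifest is written to /var/tmp/minikube/cni.yaml on the control plane and applied with the kubectl binary pinned to the cluster version, roughly:

	    sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply \
	      --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	)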
	I0223 12:58:20.621583    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:58:20.621803    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos
:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:58:20.622052    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:58:20.622058    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:20.622065    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:20.622070    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:20.624633    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:20.624643    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:20.624649    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:20.624655    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:20.624662    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:58:20.624667    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:20 GMT
	I0223 12:58:20.624673    7621 round_trippers.go:580]     Audit-Id: 67222638-afd0-4c38-84b6-7f76484aec80
	I0223 12:58:20.624678    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:20.624683    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:20.624696    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"436","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 12:58:20.624738    7621 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899000" context rescaled to 1 replicas
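	(The rescale above goes through the autoscaling/v1 Scale subresource of the coredns Deployment; a kubectl equivalent of the resulting state, as a sketch, would be:

	    kubectl -n kube-system scale deployment coredns --replicas=1
	)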
	I0223 12:58:20.624752    7621 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 12:58:20.647179    7621 out.go:177] * Verifying Kubernetes components...
	I0223 12:58:20.688288    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:58:20.700123    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:58:20.759136    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:58:20.759372    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos
:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:58:20.759603    7621 node_ready.go:35] waiting up to 6m0s for node "multinode-899000-m02" to be "Ready" ...
	I0223 12:58:20.759643    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:20.759647    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:20.759654    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:20.759659    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:20.762524    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:20.762539    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:20.762546    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:20.762551    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:20.762556    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:20 GMT
	I0223 12:58:20.762562    7621 round_trippers.go:580]     Audit-Id: 8634821f-f6a2-4fcd-8192-70855326ddcd
	I0223 12:58:20.762567    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:20.762572    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:20.762649    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"481","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58
:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations" [truncated 3841 chars]
	I0223 12:58:21.263739    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:21.263760    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:21.263772    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:21.263782    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:21.267075    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:21.267092    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:21.267100    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:21 GMT
	I0223 12:58:21.267108    7621 round_trippers.go:580]     Audit-Id: 233e4302-1887-456a-90e8-1a49f891fccd
	I0223 12:58:21.267131    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:21.267136    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:21.267142    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:21.267146    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:21.267208    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"481","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58
:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations" [truncated 3841 chars]
	I0223 12:58:21.763599    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:21.763625    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:21.763637    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:21.763740    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:21.767608    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:21.767620    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:21.767626    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:21.767631    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:21 GMT
	I0223 12:58:21.767636    7621 round_trippers.go:580]     Audit-Id: 260715fd-ae7a-4a7e-a346-9a7f64a75ed4
	I0223 12:58:21.767641    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:21.767646    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:21.767651    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:21.767721    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"481","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58
:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations" [truncated 3841 chars]
	I0223 12:58:22.263413    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:22.281474    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.281490    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.281512    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.285120    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:22.285135    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.285143    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.285179    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.285196    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.285208    7621 round_trippers.go:580]     Audit-Id: cb08a44f-4a0c-4330-8e85-d3367c73fc0f
	I0223 12:58:22.285218    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.285228    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.285643    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:22.285851    7621 node_ready.go:49] node "multinode-899000-m02" has status "Ready":"True"
	I0223 12:58:22.285862    7621 node_ready.go:38] duration metric: took 1.526223237s waiting for node "multinode-899000-m02" to be "Ready" ...
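	(The readiness wait above is a simple poll of GET /api/v1/nodes/<name> roughly every 500ms until the node's Ready condition reports True. Outside the test harness the same wait can be expressed with kubectl, for example:

	    kubectl wait --for=condition=Ready node/multinode-899000-m02 --timeout=6m0s
	)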
	I0223 12:58:22.285869    7621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:58:22.285912    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:58:22.285917    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.285923    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.285928    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.289467    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:22.289486    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.289495    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.289503    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.289510    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.289523    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.289534    7621 round_trippers.go:580]     Audit-Id: 0fa86289-091a-4f86-b936-ad688159d7dc
	I0223 12:58:22.289543    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.290808    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68605 chars]
	I0223 12:58:22.292404    7621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.292441    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:58:22.292446    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.292453    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.292459    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.294734    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.294743    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.294748    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.294753    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.294759    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.294766    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.294771    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.294776    7621 round_trippers.go:580]     Audit-Id: 8d9614e6-a3aa-4f15-a23f-a07d69b29326
	I0223 12:58:22.294870    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 12:58:22.295132    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.295138    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.295144    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.295150    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.297169    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.297178    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.297183    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.297188    7621 round_trippers.go:580]     Audit-Id: 813e9c20-c4df-4924-9883-44e58a351344
	I0223 12:58:22.297193    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.297198    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.297203    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.297208    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.297264    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.297447    7621 pod_ready.go:92] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.297453    7621 pod_ready.go:81] duration metric: took 5.040627ms waiting for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
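	(The per-pod checks that follow repeat the same pattern against /api/v1/namespaces/kube-system/pods/<name> for each system-critical component listed above. A kubectl sketch of equivalent waits, using the same labels, might look like:

	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
	)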
	I0223 12:58:22.297458    7621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.297500    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/etcd-multinode-899000
	I0223 12:58:22.297506    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.297512    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.297518    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.299533    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.299543    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.299550    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.299555    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.299561    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.299566    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.299571    7621 round_trippers.go:580]     Audit-Id: 83c24849-e5e1-411b-9327-17c90855767c
	I0223 12:58:22.299578    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.299648    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899000","namespace":"kube-system","uid":"04c36b20-3f1c-4967-be88-dfaf04e459fb","resourceVersion":"273","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.mirror":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.seen":"2023-02-23T20:57:22.892805434Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 12:58:22.299861    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.299867    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.299873    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.299889    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.301604    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:22.301613    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.301620    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.301625    7621 round_trippers.go:580]     Audit-Id: a54c5ced-d173-4e56-933c-c25de720af53
	I0223 12:58:22.301631    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.301636    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.301641    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.301646    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.301713    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.301882    7621 pod_ready.go:92] pod "etcd-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.301888    7621 pod_ready.go:81] duration metric: took 4.424711ms waiting for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.301896    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.301922    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899000
	I0223 12:58:22.301927    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.301933    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.301939    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.304007    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.304016    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.304021    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.304026    7621 round_trippers.go:580]     Audit-Id: 2ff56055-f0d7-4f5b-b20e-b2d0740dfd26
	I0223 12:58:22.304035    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.304041    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.304047    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.304053    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.304140    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899000","namespace":"kube-system","uid":"8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b","resourceVersion":"275","creationTimestamp":"2023-02-23T20:57:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.mirror":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.seen":"2023-02-23T20:57:13.277278836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 12:58:22.304381    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.304387    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.304393    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.304398    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.306610    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.306619    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.306626    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.306632    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.306639    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.306644    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.306649    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.306654    7621 round_trippers.go:580]     Audit-Id: 1eac0b7b-1910-496b-a4bf-3d17e072d626
	I0223 12:58:22.306698    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.306872    7621 pod_ready.go:92] pod "kube-apiserver-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.306879    7621 pod_ready.go:81] duration metric: took 4.977088ms waiting for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.306884    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.306911    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899000
	I0223 12:58:22.306915    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.306921    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.306927    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.309090    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.309099    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.309106    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.309111    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.309117    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.309122    7621 round_trippers.go:580]     Audit-Id: 74dac49a-2231-4156-a29e-7edf55e4d2ac
	I0223 12:58:22.309127    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.309132    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.309295    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899000","namespace":"kube-system","uid":"8a9821eb-106e-43fb-919d-59f0d6132887","resourceVersion":"301","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.mirror":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.seen":"2023-02-23T20:57:22.892794347Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 12:58:22.309545    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.309552    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.309559    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.309567    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.311558    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:22.311567    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.311573    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.311578    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.311584    7621 round_trippers.go:580]     Audit-Id: ffa67e40-e0c4-43d9-aa6c-3e693de04adc
	I0223 12:58:22.311597    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.311603    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.311608    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.311660    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.311869    7621 pod_ready.go:92] pod "kube-controller-manager-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.311875    7621 pod_ready.go:81] duration metric: took 4.985214ms waiting for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.311880    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s4pvs" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.463487    7621 request.go:622] Waited for 151.531938ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:22.463521    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:22.463525    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.463534    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.463547    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.466180    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.466203    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.466213    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.466219    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.466224    7621 round_trippers.go:580]     Audit-Id: e0b73299-3ede-4e4b-9370-6efa33f6aecc
	I0223 12:58:22.466230    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.466246    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.466257    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.466687    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:22.663549    7621 request.go:622] Waited for 196.554283ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:22.663644    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:22.663656    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.663672    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.663688    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.667542    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:22.667558    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.667569    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.667600    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.667612    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.667618    7621 round_trippers.go:580]     Audit-Id: d523eead-0b5f-4ce1-911b-1926498d8550
	I0223 12:58:22.667625    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.667632    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.667711    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:23.169160    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:23.169176    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.169185    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.169192    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.172366    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:23.172379    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.172385    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.172391    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.172396    7621 round_trippers.go:580]     Audit-Id: 2f163b7b-90fd-467d-ad4a-a387c8d49e2b
	I0223 12:58:23.172402    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.172407    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.172412    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.172471    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:23.172731    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:23.172738    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.172744    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.172749    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.174504    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:23.174516    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.174526    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.174532    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.174537    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.174542    7621 round_trippers.go:580]     Audit-Id: ecfc0f37-e6b7-4ac6-8f1e-18862a85d247
	I0223 12:58:23.174554    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.174559    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.174703    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:23.670013    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:23.670038    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.670051    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.670061    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.674512    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:58:23.674525    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.674530    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.674541    7621 round_trippers.go:580]     Audit-Id: 056aa36f-0c10-4ae0-9bf4-ca03416aa192
	I0223 12:58:23.674547    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.674551    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.674564    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.674569    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.674637    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:23.674900    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:23.674906    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.674912    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.674931    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.677078    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:23.677088    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.677094    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.677099    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.677104    7621 round_trippers.go:580]     Audit-Id: ff8172ff-b1fe-4a8b-b7af-56374f7cdb48
	I0223 12:58:23.677109    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.677114    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.677120    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.677159    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:24.168156    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:24.168173    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.168194    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.168203    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.171200    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:24.171214    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.171221    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.171227    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.171233    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.171241    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.171246    7621 round_trippers.go:580]     Audit-Id: 28a433ab-c23e-4cea-91d9-4cd9d5678c1c
	I0223 12:58:24.171251    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.171323    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:24.171729    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:24.171736    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.171742    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.171748    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.173941    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:24.173954    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.173961    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.173966    7621 round_trippers.go:580]     Audit-Id: 923e8375-07b2-49ae-938b-d9fe78c92800
	I0223 12:58:24.173971    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.173976    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.173981    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.173986    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.174461    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:24.668503    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:24.668532    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.668547    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.668558    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.672135    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:24.672146    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.672151    7621 round_trippers.go:580]     Audit-Id: 399b9bee-22b3-40a3-81c4-511834fd3059
	I0223 12:58:24.672171    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.672182    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.672188    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.672193    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.672198    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.672261    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:24.672510    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:24.672515    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.672521    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.672526    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.674668    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:24.674677    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.674683    7621 round_trippers.go:580]     Audit-Id: be7c54a9-26d0-4c17-82e3-057e89bf33af
	I0223 12:58:24.674688    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.674694    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.674699    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.674704    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.674709    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.674760    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:24.674928    7621 pod_ready.go:102] pod "kube-proxy-s4pvs" in "kube-system" namespace has status "Ready":"False"
	I0223 12:58:25.169265    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:25.169286    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.169297    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.169305    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.173798    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:58:25.173815    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.173823    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.173830    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.173837    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.173845    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.173852    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.173860    7621 round_trippers.go:580]     Audit-Id: 31c361dc-6fca-45f2-9337-041e6a2218c9
	I0223 12:58:25.173955    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:25.174291    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:25.174307    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.174317    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.174324    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.177082    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:25.177099    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.177110    7621 round_trippers.go:580]     Audit-Id: fb6c328b-26d6-4b6e-9966-0f8d2292d414
	I0223 12:58:25.177119    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.177132    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.177145    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.177156    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.177164    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.177304    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:25.670178    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:25.670203    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.670215    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.670226    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.674069    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:25.674081    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.674087    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.674093    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.674101    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.674108    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.674113    7621 round_trippers.go:580]     Audit-Id: e714f7e2-ccbc-4485-9683-8c7dbe3439ae
	I0223 12:58:25.674118    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.674177    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:25.674456    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:25.674462    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.674468    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.674475    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.676559    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:25.676569    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.676574    7621 round_trippers.go:580]     Audit-Id: bc4c3a54-5c55-402a-bb8d-b407c0267a1b
	I0223 12:58:25.676580    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.676585    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.676590    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.676597    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.676602    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.676648    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:26.168415    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:26.168431    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.168440    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.168447    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.171570    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:26.171583    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.171589    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.171601    7621 round_trippers.go:580]     Audit-Id: d70b06b6-a2e6-4916-a401-d314acfe5894
	I0223 12:58:26.171607    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.171612    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.171617    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.171622    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.171691    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:26.171940    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:26.171946    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.171952    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.171957    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.174239    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.174249    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.174255    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.174260    7621 round_trippers.go:580]     Audit-Id: a66cc2c1-1124-4166-942e-c679f1ef9f61
	I0223 12:58:26.174267    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.174273    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.174279    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.174285    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.174338    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:26.668178    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:26.668193    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.668202    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.668209    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.671016    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.671026    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.671031    7621 round_trippers.go:580]     Audit-Id: d1826630-db1e-4ae5-a106-a40117931893
	I0223 12:58:26.671037    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.671043    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.671048    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.671053    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.671058    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.671114    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"499","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 12:58:26.671369    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:26.671376    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.671384    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.671392    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.673360    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:26.673369    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.673375    7621 round_trippers.go:580]     Audit-Id: 2c082fdf-5659-4f79-bf1b-49ad416038a2
	I0223 12:58:26.673380    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.673385    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.673390    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.673396    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.673401    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.673448    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:26.673605    7621 pod_ready.go:92] pod "kube-proxy-s4pvs" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:26.673615    7621 pod_ready.go:81] duration metric: took 4.361651931s waiting for pod "kube-proxy-s4pvs" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.673621    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.673649    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-w885m
	I0223 12:58:26.673660    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.673666    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.673672    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.675790    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.675799    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.675804    7621 round_trippers.go:580]     Audit-Id: 0c849d0d-2d41-474c-97e4-77c06ce32938
	I0223 12:58:26.675809    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.675814    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.675818    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.675823    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.675828    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.676087    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w885m","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e1284e2-dcb3-408c-bc90-a501107f7e23","resourceVersion":"397","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 12:58:26.676334    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:26.676340    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.676346    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.676352    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.678207    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:26.678217    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.678222    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.678227    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.678232    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.678237    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.678242    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.678247    7621 round_trippers.go:580]     Audit-Id: b6a72a27-6d6f-4552-9a0b-c09c13cc1b60
	I0223 12:58:26.678297    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:26.678477    7621 pod_ready.go:92] pod "kube-proxy-w885m" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:26.678483    7621 pod_ready.go:81] duration metric: took 4.857735ms waiting for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.678489    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.678516    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899000
	I0223 12:58:26.678520    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.678525    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.678535    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.681020    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.681034    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.681043    7621 round_trippers.go:580]     Audit-Id: 9922716d-f364-401e-a315-abcb6d6ee5a1
	I0223 12:58:26.681049    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.681055    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.681068    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.681074    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.681079    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.681134    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899000","namespace":"kube-system","uid":"b864a38e-68d2-4949-92a9-0f736cbdf7fe","resourceVersion":"296","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bad6109cbec6cd514239122749558677","kubernetes.io/config.mirror":"bad6109cbec6cd514239122749558677","kubernetes.io/config.seen":"2023-02-23T20:57:22.892804438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 12:58:26.681340    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:26.681347    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.681352    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.681358    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.683441    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.683450    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.683455    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.683462    7621 round_trippers.go:580]     Audit-Id: 84aa2aba-0871-4a2d-907f-4d1b1b3321fa
	I0223 12:58:26.683467    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.683472    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.683477    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.683482    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.683531    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:26.683707    7621 pod_ready.go:92] pod "kube-scheduler-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:26.683713    7621 pod_ready.go:81] duration metric: took 5.219031ms waiting for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.683719    7621 pod_ready.go:38] duration metric: took 4.397762119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:58:26.683729    7621 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 12:58:26.683790    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:58:26.693999    7621 system_svc.go:56] duration metric: took 10.266093ms WaitForService to wait for kubelet.
	I0223 12:58:26.694012    7621 kubeadm.go:578] duration metric: took 6.0691297s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 12:58:26.694024    7621 node_conditions.go:102] verifying NodePressure condition ...
	I0223 12:58:26.864431    7621 request.go:622] Waited for 170.35072ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes
	I0223 12:58:26.864456    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes
	I0223 12:58:26.864461    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.864467    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.864480    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.867099    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.867110    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.867116    7621 round_trippers.go:580]     Audit-Id: 8ac1b85e-7612-4e27-94c3-795258ab68fa
	I0223 12:58:26.867121    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.867126    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.867131    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.867135    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.867141    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.867223    7621 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10175 chars]
	I0223 12:58:26.867530    7621 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0223 12:58:26.867542    7621 node_conditions.go:123] node cpu capacity is 6
	I0223 12:58:26.867548    7621 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0223 12:58:26.867552    7621 node_conditions.go:123] node cpu capacity is 6
	I0223 12:58:26.867555    7621 node_conditions.go:105] duration metric: took 173.524522ms to run NodePressure ...
	I0223 12:58:26.867563    7621 start.go:228] waiting for startup goroutines ...
	I0223 12:58:26.867585    7621 start.go:242] writing updated cluster config ...
	I0223 12:58:26.867895    7621 ssh_runner.go:195] Run: rm -f paused
	I0223 12:58:26.906037    7621 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 12:58:26.950565    7621 out.go:177] * Done! kubectl is now configured to use "multinode-899000" cluster and "default" namespace by default
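	For reference, the pod_ready.go entries above reflect a plain readiness poll: each kube-system pod is re-fetched roughly every 500 ms, within the stated 6m0s budget, until its PodReady condition reports True. A minimal client-go sketch of that pattern is shown below; it is illustrative only, not minikube's actual implementation, and the kubeconfig path, namespace, and pod name ("kube-proxy-s4pvs") are taken from this log purely as stand-ins.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: kubeconfig at the default ~/.kube/config location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Overall budget mirrors the "waiting up to 6m0s" lines in the log.
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		const ns, name = "kube-system", "kube-proxy-s4pvs" // stand-in pod from the log above
		for {
			pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Printf("pod %q is Ready\n", name)
				return
			}
			select {
			case <-ctx.Done():
				fmt.Printf("timed out waiting for pod %q\n", name)
				return
			case <-time.After(500 * time.Millisecond): // poll interval seen in the timestamps above
			}
		}
	}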
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 20:57:05 UTC, end at Thu 2023-02-23 20:58:34 UTC. --
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317595816Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317615669Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317624695Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317678328Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317700895Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317750973Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317794393Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317867753Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317908973Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.318170773Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.318238674Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.318683224Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.326002821Z" level=info msg="Loading containers: start."
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.402284497Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.434330009Z" level=info msg="Loading containers: done."
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.442374964Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.442442869Z" level=info msg="Daemon has completed initialization"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.462200298Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 20:57:09 multinode-899000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.466131795Z" level=info msg="API listen on [::]:2376"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.472601456Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.560555858Z" level=info msg="ignoring event" container=6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.671502523Z" level=info msg="ignoring event" container=94788107a1e93da48536e32619b66fa9469e39a448fe8c3b0b247522d98cd443 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.786982860Z" level=info msg="ignoring event" container=2dbb1ff5944ec88f0c4829cd85418f0b56c5be224ce4e787b39d286e88707372 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.874477341Z" level=info msg="ignoring event" container=4f5a4c753a363cbe7fe0e463e5f59c0f384563f5ecb47b2847d94f12c34d7324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	5525218a9e92a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   4 seconds ago        Running             busybox                   0                   e8013f02ecb87
	76bce82b7d450       5185b96f0becf                                                                                         43 seconds ago       Running             coredns                   1                   03e8a7447b139
	5b4de5d50db8f       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              55 seconds ago       Running             kindnet-cni               0                   d3e6dd0e53d06
	086926cf4bd23       6e38f40d628db                                                                                         57 seconds ago       Running             storage-provisioner       0                   ec2713b77469d
	2dbb1ff5944ec       5185b96f0becf                                                                                         57 seconds ago       Exited              coredns                   0                   4f5a4c753a363
	730147186f0db       46a6bb3c77ce0                                                                                         58 seconds ago       Running             kube-proxy                0                   102c80b0fd0ca
	4a8468b488876       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   2711c694901fd
	db112877a70a1       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   58092128f89d6
	ad8fcd7a26ca5       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   5a80b48095304
	8d0f71f04e8a7       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   921320e519fa2
	
	* 
	* ==> coredns [2dbb1ff5944e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 5394272695607976485.1833153103134811429. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 5394272695607976485.1833153103134811429. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [76bce82b7d45] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:41373 - 46368 "HINFO IN 5785576392753736130.8609393905576695230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015578364s
	[INFO] 10.244.0.3:41356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173355s
	[INFO] 10.244.0.3:57235 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.050066565s
	[INFO] 10.244.0.3:42102 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003603104s
	[INFO] 10.244.0.3:45013 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012998473s
	[INFO] 10.244.0.3:59007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125346s
	[INFO] 10.244.0.3:34307 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005911723s
	[INFO] 10.244.0.3:37248 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000236478s
	[INFO] 10.244.0.3:51055 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104314s
	[INFO] 10.244.0.3:44170 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004655167s
	[INFO] 10.244.0.3:55871 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108021s
	[INFO] 10.244.0.3:41998 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117292s
	[INFO] 10.244.0.3:39672 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121114s
	[INFO] 10.244.0.3:48038 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153102s
	[INFO] 10.244.0.3:37055 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087004s
	[INFO] 10.244.0.3:46193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093886s
	[INFO] 10.244.0.3:40304 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074182s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-899000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7816f70daabe48630c945a757f21bf8d759fce7d
	                    minikube.k8s.io/name=multinode-899000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T12_57_23_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 20:57:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 20:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-899000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    ca13ab7a-8d3b-40f9-b8eb-210af75da760
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-c2dqh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 coredns-787d4945fb-255qk                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     60s
	  kube-system                 etcd-multinode-899000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         72s
	  kube-system                 kindnet-gvns6                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      60s
	  kube-system                 kube-apiserver-multinode-899000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-multinode-899000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-w885m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-multinode-899000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  82s (x5 over 82s)  kubelet          Node multinode-899000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x3 over 82s)  kubelet          Node multinode-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x3 over 82s)  kubelet          Node multinode-899000 status is now: NodeHasSufficientPID
	  Normal  Starting                 73s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  72s                kubelet          Node multinode-899000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    72s                kubelet          Node multinode-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     72s                kubelet          Node multinode-899000 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             72s                kubelet          Node multinode-899000 status is now: NodeNotReady
	  Normal  NodeReady                62s                kubelet          Node multinode-899000 status is now: NodeReady
	  Normal  RegisteredNode           61s                node-controller  Node multinode-899000 event: Registered Node multinode-899000 in Controller
	
	
	Name:               multinode-899000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 20:58:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 20:58:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-899000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    ca13ab7a-8d3b-40f9-b8eb-210af75da760
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-8hfr6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kindnet-xk4c6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16s
	  kube-system                 kube-proxy-s4pvs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9s                 kube-proxy       
	  Normal  RegisteredNode           16s                node-controller  Node multinode-899000-m02 event: Registered Node multinode-899000-m02 in Controller
	  Normal  NodeHasSufficientMemory  16s (x8 over 28s)  kubelet          Node multinode-899000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16s (x8 over 28s)  kubelet          Node multinode-899000-m02 status is now: NodeHasNoDiskPressure
	
	* 
	* ==> dmesg <==
	* [  +0.000067] FS-Cache: O-key=[8] '74557e0500000000'
	[  +0.000037] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000059] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=0000000066e9be13
	[  +0.000143] FS-Cache: N-key=[8] '74557e0500000000'
	[  +0.003159] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000046] FS-Cache: O-cookie d=00000000df813808{9p.inode} n=00000000f5fd9442
	[  +0.000058] FS-Cache: O-key=[8] '74557e0500000000'
	[  +0.000052] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000034] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=000000002aa4df33
	[  +0.000075] FS-Cache: N-key=[8] '74557e0500000000'
	[  +3.589013] FS-Cache: Duplicate cookie detected
	[  +0.000046] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000033] FS-Cache: O-cookie d=00000000df813808{9p.inode} n=0000000081eaffce
	[  +0.000086] FS-Cache: O-key=[8] '73557e0500000000'
	[  +0.000053] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000065] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=00000000555cd28a
	[  +0.000052] FS-Cache: N-key=[8] '73557e0500000000'
	[  +0.394725] FS-Cache: Duplicate cookie detected
	[  +0.000039] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=00000000df813808{9p.inode} n=000000009e5b0d36
	[  +0.000055] FS-Cache: O-key=[8] '85557e0500000000'
	[  +0.000046] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000058] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=000000002aa4df33
	[  +0.000072] FS-Cache: N-key=[8] '85557e0500000000'
	
	* 
	* ==> etcd [4a8468b48887] <==
	* {"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.955Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-899000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T20:57:18.958Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T20:57:18.958Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-23T20:57:58.056Z","caller":"traceutil/trace.go:171","msg":"trace[1630010306] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"156.95355ms","start":"2023-02-23T20:57:57.899Z","end":"2023-02-23T20:57:58.056Z","steps":["trace[1630010306] 'process raft request'  (duration: 156.824827ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T20:58:00.285Z","caller":"traceutil/trace.go:171","msg":"trace[858322572] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"215.454279ms","start":"2023-02-23T20:58:00.069Z","end":"2023-02-23T20:58:00.285Z","steps":["trace[858322572] 'process raft request'  (duration: 215.306045ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:58:35 up 26 min,  0 users,  load average: 0.72, 1.04, 0.85
	Linux multinode-899000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [5b4de5d50db8] <==
	* I0223 20:57:40.236507       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 20:57:40.236581       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 20:57:40.236778       1 main.go:116] setting mtu 1500 for CNI 
	I0223 20:57:40.236798       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 20:57:40.236815       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 20:57:40.637350       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:57:40.733613       1 main.go:227] handling current node
	I0223 20:57:50.740160       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:57:50.740212       1 main.go:227] handling current node
	I0223 20:58:00.752722       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:00.752762       1 main.go:227] handling current node
	I0223 20:58:10.756060       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:10.756102       1 main.go:227] handling current node
	I0223 20:58:20.767955       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:20.767994       1 main.go:227] handling current node
	I0223 20:58:20.768002       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 20:58:20.768009       1 main.go:250] Node multinode-899000-m02 has CIDR [10.244.1.0/24] 
	I0223 20:58:20.768106       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0223 20:58:30.772237       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:30.772313       1 main.go:227] handling current node
	I0223 20:58:30.772321       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 20:58:30.772325       1 main.go:250] Node multinode-899000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [ad8fcd7a26ca] <==
	* I0223 20:57:20.086891       1 cache.go:39] Caches are synced for autoregister controller
	I0223 20:57:20.086901       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 20:57:20.086910       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 20:57:20.087128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 20:57:20.087177       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 20:57:20.087425       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 20:57:20.133257       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0223 20:57:20.133259       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 20:57:20.143946       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 20:57:20.814463       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 20:57:20.991626       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 20:57:20.994342       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 20:57:20.994379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 20:57:21.438083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 20:57:21.465950       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 20:57:21.562181       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 20:57:21.567673       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 20:57:21.568688       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 20:57:21.571938       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 20:57:22.052799       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 20:57:22.805467       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 20:57:22.812872       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 20:57:22.819963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 20:57:35.741904       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0223 20:57:35.842422       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [db112877a70a] <==
	* I0223 20:57:35.002353       1 shared_informer.go:280] Caches are synced for disruption
	I0223 20:57:35.060218       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 20:57:35.074779       1 shared_informer.go:280] Caches are synced for stateful set
	I0223 20:57:35.090906       1 shared_informer.go:280] Caches are synced for daemon sets
	I0223 20:57:35.143634       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 20:57:35.458522       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 20:57:35.539480       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 20:57:35.539520       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 20:57:35.746743       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 20:57:35.848511       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w885m"
	I0223 20:57:35.850369       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gvns6"
	I0223 20:57:35.944023       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-255qk"
	I0223 20:57:35.948150       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-fllr8"
	I0223 20:57:36.066614       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 20:57:36.072457       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-fllr8"
	W0223 20:58:19.875734       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-899000-m02" does not exist
	I0223 20:58:19.879633       1 range_allocator.go:372] Set node multinode-899000-m02 PodCIDR to [10.244.1.0/24]
	I0223 20:58:19.882750       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s4pvs"
	I0223 20:58:19.885952       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xk4c6"
	W0223 20:58:19.891471       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-899000-m02. Assuming now as a timestamp.
	I0223 20:58:19.891626       1 event.go:294] "Event occurred" object="multinode-899000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-899000-m02 event: Registered Node multinode-899000-m02 in Controller"
	W0223 20:58:22.124444       1 topologycache.go:232] Can't get CPU or zone information for multinode-899000-m02 node
	I0223 20:58:28.058105       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 20:58:28.103984       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-8hfr6"
	I0223 20:58:28.114204       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-c2dqh"
	
	* 
	* ==> kube-proxy [730147186f0d] <==
	* I0223 20:57:36.983626       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 20:57:36.983715       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 20:57:36.983760       1 server_others.go:535] "Using iptables proxy"
	I0223 20:57:37.016107       1 server_others.go:176] "Using iptables Proxier"
	I0223 20:57:37.016152       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 20:57:37.016159       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 20:57:37.016175       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 20:57:37.016193       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 20:57:37.016844       1 server.go:655] "Version info" version="v1.26.1"
	I0223 20:57:37.016879       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 20:57:37.017415       1 config.go:226] "Starting endpoint slice config controller"
	I0223 20:57:37.017422       1 config.go:317] "Starting service config controller"
	I0223 20:57:37.017480       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 20:57:37.017481       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 20:57:37.033506       1 config.go:444] "Starting node config controller"
	I0223 20:57:37.033559       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 20:57:37.118554       1 shared_informer.go:280] Caches are synced for service config
	I0223 20:57:37.118595       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 20:57:37.133594       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8d0f71f04e8a] <==
	* W0223 20:57:20.051035       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 20:57:20.051048       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 20:57:20.051215       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 20:57:20.051242       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.051255       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.051259       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 20:57:20.051370       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 20:57:20.051381       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 20:57:20.860893       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.860951       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 20:57:20.886024       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 20:57:20.886111       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 20:57:20.943345       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 20:57:20.943391       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 20:57:20.943422       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.943434       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 20:57:20.973162       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.973225       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 20:57:21.134932       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 20:57:21.134993       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 20:57:21.234479       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 20:57:21.234565       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 20:57:21.238586       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 20:57:21.238626       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0223 20:57:21.647659       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 20:57:05 UTC, end at Thu 2023-02-23 20:58:36 UTC. --
	Feb 23 20:57:37 multinode-899000 kubelet[2151]: I0223 20:57:37.576720    2151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94788107a1e93da48536e32619b66fa9469e39a448fe8c3b0b247522d98cd443"
	Feb 23 20:57:37 multinode-899000 kubelet[2151]: I0223 20:57:37.859493    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fllr8" podStartSLOduration=2.859454877 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:37.858705972 +0000 UTC m=+15.067993982" watchObservedRunningTime="2023-02-23 20:57:37.859454877 +0000 UTC m=+15.068742881"
	Feb 23 20:57:38 multinode-899000 kubelet[2151]: I0223 20:57:38.259367    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w885m" podStartSLOduration=3.2593411420000002 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:38.259216574 +0000 UTC m=+15.468504578" watchObservedRunningTime="2023-02-23 20:57:38.259341142 +0000 UTC m=+15.468629146"
	Feb 23 20:57:38 multinode-899000 kubelet[2151]: I0223 20:57:38.659955    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.659930838 pod.CreationTimestamp="2023-02-23 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:38.659850511 +0000 UTC m=+15.869138519" watchObservedRunningTime="2023-02-23 20:57:38.659930838 +0000 UTC m=+15.869218841"
	Feb 23 20:57:40 multinode-899000 kubelet[2151]: I0223 20:57:40.654967    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-255qk" podStartSLOduration=5.654940557 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:39.064113915 +0000 UTC m=+16.273401920" watchObservedRunningTime="2023-02-23 20:57:40.654940557 +0000 UTC m=+17.864228561"
	Feb 23 20:57:40 multinode-899000 kubelet[2151]: I0223 20:57:40.655104    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gvns6" podStartSLOduration=-9.223372031199686e+09 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="2023-02-23 20:57:36.991473098 +0000 UTC m=+14.200761098" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:40.65485295 +0000 UTC m=+17.864140959" watchObservedRunningTime="2023-02-23 20:57:40.655089167 +0000 UTC m=+17.864377171"
	Feb 23 20:57:43 multinode-899000 kubelet[2151]: I0223 20:57:43.559775    2151 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 20:57:43 multinode-899000 kubelet[2151]: I0223 20:57:43.560481    2151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.746650    2151 scope.go:115] "RemoveContainer" containerID="6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.756619    2151 scope.go:115] "RemoveContainer" containerID="6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: E0223 20:57:50.757576    2151 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7" containerID="6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.757630    2151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7} err="failed to get container status \"6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7\": rpc error: code = Unknown desc = Error: No such container: 6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.886272    2151 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64ntk\" (UniqueName: \"kubernetes.io/projected/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-kube-api-access-64ntk\") pod \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\" (UID: \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\") "
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.886337    2151 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-config-volume\") pod \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\" (UID: \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\") "
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: W0223 20:57:50.886465    2151 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.886580    2151 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" (UID: "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.888384    2151 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-kube-api-access-64ntk" (OuterVolumeSpecName: "kube-api-access-64ntk") pod "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" (UID: "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b"). InnerVolumeSpecName "kube-api-access-64ntk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.986964    2151 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-config-volume\") on node \"multinode-899000\" DevicePath \"\""
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.987016    2151 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-64ntk\" (UniqueName: \"kubernetes.io/projected/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-kube-api-access-64ntk\") on node \"multinode-899000\" DevicePath \"\""
	Feb 23 20:57:51 multinode-899000 kubelet[2151]: I0223 20:57:51.076428    2151 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9f55cbe6-d30b-4575-96d6-0d79d5e6a97b path="/var/lib/kubelet/pods/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b/volumes"
	Feb 23 20:57:51 multinode-899000 kubelet[2151]: I0223 20:57:51.765889    2151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5a4c753a363cbe7fe0e463e5f59c0f384563f5ecb47b2847d94f12c34d7324"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: I0223 20:58:28.118887    2151 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: E0223 20:58:28.119228    2151 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" containerName="coredns"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: I0223 20:58:28.119342    2151 memory_manager.go:346] "RemoveStaleState removing state" podUID="9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" containerName="coredns"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: I0223 20:58:28.258167    2151 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkv5p\" (UniqueName: \"kubernetes.io/projected/c0b18eec-d8fe-4ce9-bc1f-74eae6a40582-kube-api-access-hkv5p\") pod \"busybox-6b86dd6d48-c2dqh\" (UID: \"c0b18eec-d8fe-4ce9-bc1f-74eae6a40582\") " pod="default/busybox-6b86dd6d48-c2dqh"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-899000 -n multinode-899000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-899000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (9.12s)

TestMultiNode/serial/PingHostFrom2Pods (4.66s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

** /stderr **
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- sh -c "ping -c 1 <nil>"
multinode_test.go:558: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- sh -c "ping -c 1 <nil>": exit status 2 (152.60535ms)

** stderr ** 
	sh: syntax error: unexpected end of file
	command terminated with exit code 2

** /stderr **
multinode_test.go:559: Failed to ping host (<nil>) from pod (busybox-6b86dd6d48-8hfr6): exit status 2
multinode_test.go:547: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-c2dqh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-c2dqh -- sh -c "ping -c 1 192.168.65.2"
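The nil host IP for busybox-6b86dd6d48-8hfr6 above comes from the extraction step rather than from ping itself: the test pipes nslookup host.minikube.internal through awk 'NR==5' | cut -d' ' -f3, and when the lookup fails inside the pod that pipeline prints nothing, so the address the test later substitutes into the ping command is <nil> and busybox's sh rejects the truncated command line (the "unexpected end of file" error above). A minimal by-hand sketch of that failure mode (a diagnostic aid, not part of the test; it assumes the multinode-899000 cluster and the pod names shown above still exist):

	# The bare lookup fails inside the pod, matching the stderr captured above.
	out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- sh -c "nslookup host.minikube.internal"
	# The same pipeline the test uses then emits an empty string: the failed lookup's
	# output is too short for awk to find a fifth line, so cut has nothing to extract.
	out/minikube-darwin-amd64 kubectl -p multinode-899000 -- exec busybox-6b86dd6d48-8hfr6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"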
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-899000
helpers_test.go:235: (dbg) docker inspect multinode-899000:

-- stdout --
	[
	    {
	        "Id": "d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f",
	        "Created": "2023-02-23T20:57:05.198521017Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 92358,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T20:57:05.479780897Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:b74f629d1852fc20b6085123d98944654faddf1d7e642b41aa2866d7a48081ea",
	        "ResolvConfPath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/hostname",
	        "HostsPath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/hosts",
	        "LogPath": "/var/lib/docker/containers/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f/d420670bd4c5e00bc43aff3757784196522080617d7d827b9f9c41b5417ac51f-json.log",
	        "Name": "/multinode-899000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-899000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-899000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23-init/diff:/var/lib/docker/overlay2/8ec2612a0ddcb8334b31fa2e2bc600c6d5b9a8c44165b2b56481359e67f82632/diff:/var/lib/docker/overlay2/5a4fcd864af35524d91e9f03f7a3ee889f13eb86bb854aeb6e62c3838280d5fc/diff:/var/lib/docker/overlay2/ca9e0d5e9bddb9a2d473c37bab2ac5f9f184126f5fb6e4c745f3be8914c03532/diff:/var/lib/docker/overlay2/619c31ca980751eda08bd35f1a83d95b3063245da47b494f158d072021494f4c/diff:/var/lib/docker/overlay2/7d620f2b5b85f7324d49fb2708fb7d4f1db9ff6b108d4ca3c6e3f6e8898b3ccc/diff:/var/lib/docker/overlay2/4ddfbadfca4c3e934e23063eb72f0a8b496f080e58fde7b65d0d73fac442087a/diff:/var/lib/docker/overlay2/27b7006de0c1a19fcc1c6121cd2f4e901780b83b732ce0880bc790e4d703cca6/diff:/var/lib/docker/overlay2/db9789081d8550dc6534127eb8db4d8c036eb99ed233cd3b179dcdd2148a8383/diff:/var/lib/docker/overlay2/78c4cb6843b7d55ed4487f84ff898a18bd4cf5b3ed008c952adc374157e890e2/diff:/var/lib/docker/overlay2/03a217
ffcc58371b47ca0920df99dd665be045c23519c8cf9abab2bdab1c5054/diff:/var/lib/docker/overlay2/011d725b17aadc4eb439b621974c407496cba93a833556a743d66552c707c1dc/diff:/var/lib/docker/overlay2/0b008f9fc314f9c01e518f7460862c8547f3d93385956a53f28f98fcd75dadd6/diff:/var/lib/docker/overlay2/356adf5e7cf2a827d25ddea32416e1a9e7d00b4b0adba15e70b4851516eaf000/diff:/var/lib/docker/overlay2/c9670a6f6981744d99152f0dbb1d59bf038363e715ac12f11e6ac3afec9650e4/diff:/var/lib/docker/overlay2/ab49bf4c3150a4da37f8525728f9da7e0aaded3fe8a24f903933eacd72f241da/diff:/var/lib/docker/overlay2/384753914be6edc5df597f20420a7b590d74a58e09b4f7eea9d19f5ccd3a971d/diff:/var/lib/docker/overlay2/a055650e8b909c9a2df13d514e5fcc459a3456dbcc9bc4597740578105e5f705/diff:/var/lib/docker/overlay2/985a888024d5ed2ee945bf037da4836977930ed967631a6e18255471a7b729c4/diff:/var/lib/docker/overlay2/591f52d09d50d8870b1601d17c65c0767b1d2e1db18e67a25b132b849fea51b2/diff:/var/lib/docker/overlay2/e64bda0fa456ba46eaadd53b798f3bb3a7fb3e3956685834382f9aa1e7c905f9/diff:/var/lib/d
ocker/overlay2/f698a91600258430cf3c97106cbb6ffbbba5818713bca72a2aba46cf92255e27/diff:/var/lib/docker/overlay2/1323dd726fea756f28381ac36970e1171e467b330f1d43ed15be5a82f7d8a892/diff:/var/lib/docker/overlay2/9607967e3631ebbf10a2e397fc287ae0fbbed8fc54f3bf39da1d050a410bb255/diff:/var/lib/docker/overlay2/e12a332b82c5db56dbc7e53aaa44c06434b071764e20d913001f71d97fadd232/diff:/var/lib/docker/overlay2/97a4d1655b4f47448f2f200a6b8f150e8f2960d0d6ff2b0920fd238d9fdc2c31/diff:/var/lib/docker/overlay2/15df85038e2f3436e3b23a6a35b84dcfaf3a735e506bc5af660c42519ede298b/diff:/var/lib/docker/overlay2/f29a318a8cfae29d19562dd7912e063084b1d321d8ea83b99f2808e363cec6bc/diff:/var/lib/docker/overlay2/73ecd3a5605dfc1ae938831bd261835b5bb3bf460857b84c0fbdb5ffcb290ea4/diff:/var/lib/docker/overlay2/949f2d40b73ae371ac4e7c81ef706a01da68e0a57145f13a3fb86c7eced257ef/diff:/var/lib/docker/overlay2/8d25550160c88d6c241f448420dd26daecce6bec8f774f2856a177a168ce3fe6/diff:/var/lib/docker/overlay2/27cbe8818217798c2761338718966cd435aaffff19e407bc5f20e21a831
c0172/diff:/var/lib/docker/overlay2/a8f41e83c2e19c1acaeb75ef0ef6daafe8f0c5675eb7a992ea4ad209f87b46b2/diff:/var/lib/docker/overlay2/4f127e69080651067a861bb1f9bbd08f2f57f6e05be509454e3e2a0cb0ecb178/diff:/var/lib/docker/overlay2/8bb03066bbd99667f78fb7ff8ed0939f8b06292372682c8f4a89d827588f18e6/diff:/var/lib/docker/overlay2/73261e58d3c16db540f287c0ddcdf6f3c4b9c869786e4e7a661931de7d55843e/diff:/var/lib/docker/overlay2/d48b7bafe3c2c5c869e17e7b043f3b4a5e5a13904f8fee77e9c429d43728fca9/diff:/var/lib/docker/overlay2/2e7b5043b64f757d5a308975d9ad9a451757a9fa450a726ce95e73347c79827a/diff:/var/lib/docker/overlay2/e8b366c628c74f57c66fd24385fa652cb7cfa81cec087f8ccec4ab98a6ae74d3/diff:/var/lib/docker/overlay2/3bb66a3fc586cafc4962828727dae244c9ee067ec0243f3f41f4e8fd1466ea80/diff:/var/lib/docker/overlay2/414633bd8851e03d3803cf3f8aa8c554a49cca39dff0d98db607dc81f318caea/diff:/var/lib/docker/overlay2/b2138b716615229ce59ff1ce8021afd5ca9d54aa64dfb7a928f137245788c9af/diff:/var/lib/docker/overlay2/51951ea2e125ce6991f056da1954df04375089
bd3c3897a92ee7e036a2a2e9ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23/merged",
	                "UpperDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23/diff",
	                "WorkDir": "/var/lib/docker/overlay2/96aefa811ebdc7e464dcb2fd2281efacc0961351917459ef4b73631abe415e23/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-899000",
	                "Source": "/var/lib/docker/volumes/multinode-899000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-899000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-899000",
	                "name.minikube.sigs.k8s.io": "multinode-899000",
	                "org.opencontainers.image.ref.name": "ubuntu",
	                "org.opencontainers.image.version": "20.04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0893f49b2ebfbeed4d6531f12da7aa861b3f27403ce22ff5a3d269959ecb30a2",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51100"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51101"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51103"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "51104"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0893f49b2ebf",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-899000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "d420670bd4c5",
	                        "multinode-899000"
	                    ],
	                    "NetworkID": "74907d76fcbca3db0a3e224115a644eb0ad70a95bb2c54a24a34566f5665c6c8",
	                    "EndpointID": "92f617f7866f5839016390d26b6d715bd579262d63ac86cfca24748d985df14f",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-899000 -n multinode-899000
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 logs -n 25: (2.599477536s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | --memory=2048 --mount                             |                      |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize                       |                      |         |         |                     |                     |
	|         | 6543 --mount-port 46465                           |                      |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes                     |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| ssh     | mount-start-2-367000 ssh -- ls                    | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-1-354000                           | mount-start-1-354000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | --alsologtostderr -v=5                            |                      |         |         |                     |                     |
	| ssh     | mount-start-2-367000 ssh -- ls                    | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| start   | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| ssh     | mount-start-2-367000 ssh -- ls                    | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-367000                           | mount-start-2-367000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| delete  | -p mount-start-1-354000                           | mount-start-1-354000 | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:56 PST |
	| start   | -p multinode-899000                               | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:56 PST | 23 Feb 23 12:58 PST |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- apply -f                   | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- rollout                    | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- get pods -o                | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- get pods -o                | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- get pods -o                | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-8hfr6                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST |                     |
	|         | busybox-6b86dd6d48-8hfr6 -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 <nil>                                |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-899000 -- exec                       | multinode-899000     | jenkins | v1.29.0 | 23 Feb 23 12:58 PST | 23 Feb 23 12:58 PST |
	|         | busybox-6b86dd6d48-c2dqh -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.65.2                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 12:56:57
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 12:56:57.258012    7621 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:56:57.258168    7621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:56:57.258173    7621 out.go:309] Setting ErrFile to fd 2...
	I0223 12:56:57.258177    7621 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:56:57.258290    7621 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:56:57.259624    7621 out.go:303] Setting JSON to false
	I0223 12:56:57.278075    7621 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1592,"bootTime":1677184225,"procs":387,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:56:57.278200    7621 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:56:57.299385    7621 out.go:177] * [multinode-899000] minikube v1.29.0 on Darwin 13.2
	I0223 12:56:57.341708    7621 notify.go:220] Checking for updates...
	I0223 12:56:57.363236    7621 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 12:56:57.384243    7621 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:56:57.405392    7621 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:56:57.426199    7621 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:56:57.447257    7621 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 12:56:57.468460    7621 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 12:56:57.489569    7621 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 12:56:57.551685    7621 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:56:57.551814    7621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:56:57.692723    7621 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:56:57.600524656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:56:57.736150    7621 out.go:177] * Using the docker driver based on user configuration
	I0223 12:56:57.757142    7621 start.go:296] selected driver: docker
	I0223 12:56:57.757169    7621 start.go:857] validating driver "docker" against <nil>
	I0223 12:56:57.757185    7621 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 12:56:57.761124    7621 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:56:57.902178    7621 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:false NGoroutines:51 SystemTime:2023-02-23 20:56:57.810277225 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:56:57.902283    7621 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 12:56:57.902491    7621 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 12:56:57.924266    7621 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 12:56:57.945893    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:56:57.945920    7621 cni.go:136] 0 nodes found, recommending kindnet
	I0223 12:56:57.945930    7621 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 12:56:57.945950    7621 start_flags.go:319] config:
	{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:56:57.967764    7621 out.go:177] * Starting control plane node multinode-899000 in cluster multinode-899000
	I0223 12:56:57.989030    7621 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:56:58.010848    7621 out.go:177] * Pulling base image ...
	I0223 12:56:58.052993    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:56:58.053051    7621 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:56:58.053104    7621 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 12:56:58.053127    7621 cache.go:57] Caching tarball of preloaded images
	I0223 12:56:58.053350    7621 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 12:56:58.053369    7621 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 12:56:58.055755    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:56:58.055813    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json: {Name:mk6af36b0687a54554dd5acaa8f5c9b1d8730d32 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:56:58.109286    7621 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 12:56:58.109305    7621 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 12:56:58.109324    7621 cache.go:193] Successfully downloaded all kic artifacts
	I0223 12:56:58.109367    7621 start.go:364] acquiring machines lock for multinode-899000: {Name:mk988186d61e0f5195c5933755c16d9cd5d267e1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 12:56:58.109526    7621 start.go:368] acquired machines lock for "multinode-899000" in 147.42µs
	I0223 12:56:58.109558    7621 start.go:93] Provisioning new machine with config: &{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 12:56:58.109633    7621 start.go:125] createHost starting for "" (driver="docker")
	I0223 12:56:58.131781    7621 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 12:56:58.132168    7621 start.go:159] libmachine.API.Create for "multinode-899000" (driver="docker")
	I0223 12:56:58.132217    7621 client.go:168] LocalClient.Create starting
	I0223 12:56:58.132396    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 12:56:58.132487    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:56:58.132520    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:56:58.132642    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 12:56:58.132706    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:56:58.132723    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:56:58.133586    7621 cli_runner.go:164] Run: docker network inspect multinode-899000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 12:56:58.187512    7621 cli_runner.go:211] docker network inspect multinode-899000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 12:56:58.187618    7621 network_create.go:281] running [docker network inspect multinode-899000] to gather additional debugging logs...
	I0223 12:56:58.187637    7621 cli_runner.go:164] Run: docker network inspect multinode-899000
	W0223 12:56:58.240550    7621 cli_runner.go:211] docker network inspect multinode-899000 returned with exit code 1
	I0223 12:56:58.240580    7621 network_create.go:284] error running [docker network inspect multinode-899000]: docker network inspect multinode-899000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: multinode-899000
	I0223 12:56:58.240592    7621 network_create.go:286] output of [docker network inspect multinode-899000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: multinode-899000
	
	** /stderr **
	I0223 12:56:58.240685    7621 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 12:56:58.295025    7621 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 12:56:58.295363    7621 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e50320}
	I0223 12:56:58.295376    7621 network_create.go:123] attempt to create docker network multinode-899000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 12:56:58.295453    7621 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-899000 multinode-899000
	I0223 12:56:58.380688    7621 network_create.go:107] docker network multinode-899000 192.168.58.0/24 created
	I0223 12:56:58.380730    7621 kic.go:117] calculated static IP "192.168.58.2" for the "multinode-899000" container
	I0223 12:56:58.380855    7621 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 12:56:58.435590    7621 cli_runner.go:164] Run: docker volume create multinode-899000 --label name.minikube.sigs.k8s.io=multinode-899000 --label created_by.minikube.sigs.k8s.io=true
	I0223 12:56:58.489652    7621 oci.go:103] Successfully created a docker volume multinode-899000
	I0223 12:56:58.489796    7621 cli_runner.go:164] Run: docker run --rm --name multinode-899000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000 --entrypoint /usr/bin/test -v multinode-899000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 12:56:58.925960    7621 oci.go:107] Successfully prepared a docker volume multinode-899000
	I0223 12:56:58.926003    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:56:58.926018    7621 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 12:56:58.926123    7621 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 12:57:05.001316    7621 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.074998075s)
	I0223 12:57:05.001336    7621 kic.go:199] duration metric: took 6.075208 seconds to extract preloaded images to volume
	I0223 12:57:05.001454    7621 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 12:57:05.144571    7621 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-899000 --name multinode-899000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-899000 --network multinode-899000 --ip 192.168.58.2 --volume multinode-899000:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 12:57:05.488917    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Running}}
	I0223 12:57:05.549239    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:05.607583    7621 cli_runner.go:164] Run: docker exec multinode-899000 stat /var/lib/dpkg/alternatives/iptables
	I0223 12:57:05.721407    7621 oci.go:144] the created container "multinode-899000" has a running status.
	I0223 12:57:05.721438    7621 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa...
	I0223 12:57:05.882882    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 12:57:05.882954    7621 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 12:57:05.987239    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:06.045505    7621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 12:57:06.045526    7621 kic_runner.go:114] Args: [docker exec --privileged multinode-899000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0223 12:57:06.149979    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:06.206338    7621 machine.go:88] provisioning docker machine ...
	I0223 12:57:06.206382    7621 ubuntu.go:169] provisioning hostname "multinode-899000"
	I0223 12:57:06.206467    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.263261    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:06.263642    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:06.263655    7621 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899000 && echo "multinode-899000" | sudo tee /etc/hostname
	I0223 12:57:06.407667    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899000
	
	I0223 12:57:06.407755    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.466148    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:06.466522    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:06.466535    7621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 12:57:06.601336    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 12:57:06.601357    7621 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-825/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-825/.minikube}
	I0223 12:57:06.601376    7621 ubuntu.go:177] setting up certificates
	I0223 12:57:06.601384    7621 provision.go:83] configureAuth start
	I0223 12:57:06.601457    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:57:06.657696    7621 provision.go:138] copyHostCerts
	I0223 12:57:06.657743    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:57:06.657799    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem, removing ...
	I0223 12:57:06.657806    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:57:06.657904    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem (1078 bytes)
	I0223 12:57:06.658084    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:57:06.658117    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem, removing ...
	I0223 12:57:06.658122    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:57:06.658187    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem (1123 bytes)
	I0223 12:57:06.658315    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:57:06.658350    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem, removing ...
	I0223 12:57:06.658354    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:57:06.658415    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem (1675 bytes)
	I0223 12:57:06.658543    7621 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem org=jenkins.multinode-899000 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-899000]
	I0223 12:57:06.714310    7621 provision.go:172] copyRemoteCerts
	I0223 12:57:06.714361    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 12:57:06.714408    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.770366    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:06.865632    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 12:57:06.865730    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 12:57:06.882924    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 12:57:06.883003    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0223 12:57:06.899760    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 12:57:06.899840    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 12:57:06.916716    7621 provision.go:86] duration metric: configureAuth took 315.312867ms
	I0223 12:57:06.916731    7621 ubuntu.go:193] setting minikube options for container-runtime
	I0223 12:57:06.916881    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:57:06.916949    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:06.973133    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:06.973483    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:06.973497    7621 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 12:57:07.105173    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 12:57:07.105200    7621 ubuntu.go:71] root file system type: overlay
	I0223 12:57:07.105292    7621 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 12:57:07.105391    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:07.161583    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:07.161943    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:07.161992    7621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 12:57:07.305650    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 12:57:07.305759    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:07.362739    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:57:07.363094    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51100 <nil> <nil>}
	I0223 12:57:07.363107    7621 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 12:57:07.968160    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 20:57:07.303656908 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 12:57:07.968185    7621 machine.go:91] provisioned docker machine in 1.76179481s
	I0223 12:57:07.968191    7621 client.go:171] LocalClient.Create took 9.835787482s
	I0223 12:57:07.968223    7621 start.go:167] duration metric: libmachine.API.Create for "multinode-899000" took 9.835875729s
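	The unit update a few lines above is deliberately idempotent: the freshly written docker.service.new is diffed against the live unit, and only when they differ is it moved into place and Docker re-enabled and restarted. A minimal bash sketch of that pattern (paths are the ones from this run; the sketch itself is not part of the test output):

	    # Replace-only-if-changed update of a systemd unit, as done by the SSH command above.
	    UNIT=/lib/systemd/system/docker.service
	    NEW=${UNIT}.new
	    if ! sudo diff -u "$UNIT" "$NEW"; then
	        sudo mv "$NEW" "$UNIT"
	        sudo systemctl daemon-reload
	        sudo systemctl enable docker
	        sudo systemctl restart docker
	    fi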
	I0223 12:57:07.968237    7621 start.go:300] post-start starting for "multinode-899000" (driver="docker")
	I0223 12:57:07.968242    7621 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 12:57:07.968317    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 12:57:07.968377    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.025072    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.119943    7621 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 12:57:08.123557    7621 command_runner.go:130] > NAME="Ubuntu"
	I0223 12:57:08.123568    7621 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 12:57:08.123572    7621 command_runner.go:130] > ID=ubuntu
	I0223 12:57:08.123586    7621 command_runner.go:130] > ID_LIKE=debian
	I0223 12:57:08.123592    7621 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 12:57:08.123596    7621 command_runner.go:130] > VERSION_ID="20.04"
	I0223 12:57:08.123603    7621 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 12:57:08.123608    7621 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 12:57:08.123612    7621 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 12:57:08.123622    7621 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 12:57:08.123626    7621 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 12:57:08.123630    7621 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 12:57:08.123685    7621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 12:57:08.123697    7621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 12:57:08.123704    7621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 12:57:08.123710    7621 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 12:57:08.123720    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/addons for local assets ...
	I0223 12:57:08.123819    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/files for local assets ...
	I0223 12:57:08.123992    7621 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> 20572.pem in /etc/ssl/certs
	I0223 12:57:08.123999    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /etc/ssl/certs/20572.pem
	I0223 12:57:08.124181    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 12:57:08.131664    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:57:08.148562    7621 start.go:303] post-start completed in 180.312213ms
	I0223 12:57:08.149075    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:57:08.205082    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:57:08.205495    7621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:57:08.205549    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.261424    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.354879    7621 command_runner.go:130] > 5%
	I0223 12:57:08.354999    7621 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 12:57:08.359252    7621 command_runner.go:130] > 100G
	I0223 12:57:08.359532    7621 start.go:128] duration metric: createHost completed in 10.249707203s
	I0223 12:57:08.359567    7621 start.go:83] releasing machines lock for "multinode-899000", held for 10.249845411s
	I0223 12:57:08.359668    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:57:08.415149    7621 ssh_runner.go:195] Run: cat /version.json
	I0223 12:57:08.415178    7621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 12:57:08.415220    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.415244    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:08.474888    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.475075    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:08.624757    7621 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 12:57:08.626732    7621 command_runner.go:130] > {"iso_version": "v1.29.0-1676397967-15752", "kicbase_version": "v0.0.37-1676506612-15768", "minikube_version": "v1.29.0", "commit": "1ecebb4330bc6283999d4ca9b3c62a9eeee8c692"}
	I0223 12:57:08.626894    7621 ssh_runner.go:195] Run: systemctl --version
	I0223 12:57:08.631474    7621 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.19)
	I0223 12:57:08.631497    7621 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I0223 12:57:08.631585    7621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 12:57:08.636472    7621 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 12:57:08.636484    7621 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 12:57:08.636489    7621 command_runner.go:130] > Device: a6h/166d	Inode: 2229761     Links: 1
	I0223 12:57:08.636494    7621 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:57:08.636501    7621 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:57:08.636505    7621 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:57:08.636509    7621 command_runner.go:130] > Change: 2023-02-23 20:33:52.692471760 +0000
	I0223 12:57:08.636513    7621 command_runner.go:130] >  Birth: -
	I0223 12:57:08.636573    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 12:57:08.656127    7621 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
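	The find/sed pair above normalizes the preinstalled loopback CNI config so current CNI plugins accept it: it injects a "name": "loopback" field when missing and pins "cniVersion" to 1.0.0. The log does not print the resulting file, but under those two edits /etc/cni/net.d/200-loopback.conf ends up shaped roughly like this (illustrative sketch, not captured output):

	    # Inspect the patched loopback config; the JSON in the comments is an assumption
	    # apart from the "name" and "cniVersion" values the sed command above explicitly sets.
	    cat /etc/cni/net.d/200-loopback.conf
	    # {
	    #   "cniVersion": "1.0.0",
	    #   "name": "loopback",
	    #   "type": "loopback"
	    # }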
	I0223 12:57:08.656200    7621 ssh_runner.go:195] Run: which cri-dockerd
	I0223 12:57:08.659899    7621 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 12:57:08.660110    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 12:57:08.667420    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 12:57:08.679972    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 12:57:08.694467    7621 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 12:57:08.694510    7621 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 12:57:08.694522    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:57:08.694533    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:57:08.694603    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:57:08.706589    7621 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:57:08.706601    7621 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:57:08.707395    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 12:57:08.715692    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 12:57:08.724063    7621 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 12:57:08.724120    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 12:57:08.732428    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:57:08.740746    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 12:57:08.749220    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:57:08.757697    7621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 12:57:08.765528    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 12:57:08.773699    7621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 12:57:08.780158    7621 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 12:57:08.780789    7621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 12:57:08.787632    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:57:08.855416    7621 ssh_runner.go:195] Run: sudo systemctl restart containerd
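	The block of sed edits above rewrites /etc/containerd/config.toml in place: the sandbox (pause) image is set to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the detected cgroupfs driver, the legacy runtime names are mapped to io.containerd.runc.v2, and conf_dir is pointed at /etc/cni/net.d. A quick way to confirm the result on the node (sketch; not part of the test output):

	    # Verify the containerd settings targeted by the sed edits above.
	    sudo grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
	    # expected after the edits:
	    #   sandbox_image = "registry.k8s.io/pause:3.9"
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"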
	I0223 12:57:08.931306    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:57:08.931326    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:57:08.931389    7621 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 12:57:08.940671    7621 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 12:57:08.940905    7621 command_runner.go:130] > [Unit]
	I0223 12:57:08.940913    7621 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 12:57:08.940918    7621 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 12:57:08.940922    7621 command_runner.go:130] > BindsTo=containerd.service
	I0223 12:57:08.940927    7621 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 12:57:08.940931    7621 command_runner.go:130] > Wants=network-online.target
	I0223 12:57:08.940939    7621 command_runner.go:130] > Requires=docker.socket
	I0223 12:57:08.940943    7621 command_runner.go:130] > StartLimitBurst=3
	I0223 12:57:08.940947    7621 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 12:57:08.940950    7621 command_runner.go:130] > [Service]
	I0223 12:57:08.940955    7621 command_runner.go:130] > Type=notify
	I0223 12:57:08.940958    7621 command_runner.go:130] > Restart=on-failure
	I0223 12:57:08.940964    7621 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 12:57:08.940972    7621 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 12:57:08.940979    7621 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 12:57:08.940986    7621 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 12:57:08.940995    7621 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 12:57:08.941009    7621 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 12:57:08.941017    7621 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 12:57:08.941029    7621 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 12:57:08.941039    7621 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 12:57:08.941042    7621 command_runner.go:130] > ExecStart=
	I0223 12:57:08.941055    7621 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 12:57:08.941060    7621 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 12:57:08.941065    7621 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 12:57:08.941071    7621 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 12:57:08.941076    7621 command_runner.go:130] > LimitNOFILE=infinity
	I0223 12:57:08.941079    7621 command_runner.go:130] > LimitNPROC=infinity
	I0223 12:57:08.941087    7621 command_runner.go:130] > LimitCORE=infinity
	I0223 12:57:08.941092    7621 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 12:57:08.941096    7621 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 12:57:08.941101    7621 command_runner.go:130] > TasksMax=infinity
	I0223 12:57:08.941104    7621 command_runner.go:130] > TimeoutStartSec=0
	I0223 12:57:08.941111    7621 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 12:57:08.941115    7621 command_runner.go:130] > Delegate=yes
	I0223 12:57:08.941120    7621 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 12:57:08.941123    7621 command_runner.go:130] > KillMode=process
	I0223 12:57:08.941130    7621 command_runner.go:130] > [Install]
	I0223 12:57:08.941134    7621 command_runner.go:130] > WantedBy=multi-user.target
	I0223 12:57:08.941612    7621 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 12:57:08.941679    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 12:57:08.951806    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:57:08.965460    7621 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:57:08.965473    7621 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:57:08.966196    7621 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 12:57:09.061007    7621 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 12:57:09.148581    7621 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 12:57:09.148598    7621 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 12:57:09.161847    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:57:09.249710    7621 ssh_runner.go:195] Run: sudo systemctl restart docker
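	The restart above picks up the 144-byte /etc/docker/daemon.json written two steps earlier, which is what switches dockerd to the cgroupfs driver. The payload itself is not echoed in the log; a daemon.json selecting that driver typically carries something like the fragment below (illustrative only), and the effect is confirmed by docker info later in the log:

	    # Illustrative check; the exact daemon.json written above is not shown in the log.
	    cat /etc/docker/daemon.json
	    # {
	    #   "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    #   ...
	    # }
	    docker info --format '{{.CgroupDriver}}'   # prints: cgroupfs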
	I0223 12:57:09.464408    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:57:09.531667    7621 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 12:57:09.531735    7621 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 12:57:09.595616    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:57:09.663989    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:57:09.732666    7621 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 12:57:09.752610    7621 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 12:57:09.752690    7621 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 12:57:09.756712    7621 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 12:57:09.756723    7621 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 12:57:09.756728    7621 command_runner.go:130] > Device: aeh/174d	Inode: 206         Links: 1
	I0223 12:57:09.756743    7621 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 12:57:09.756749    7621 command_runner.go:130] > Access: 2023-02-23 20:57:09.740656885 +0000
	I0223 12:57:09.756754    7621 command_runner.go:130] > Modify: 2023-02-23 20:57:09.740656885 +0000
	I0223 12:57:09.756759    7621 command_runner.go:130] > Change: 2023-02-23 20:57:09.749656885 +0000
	I0223 12:57:09.756762    7621 command_runner.go:130] >  Birth: -
	I0223 12:57:09.756782    7621 start.go:553] Will wait 60s for crictl version
	I0223 12:57:09.756820    7621 ssh_runner.go:195] Run: which crictl
	I0223 12:57:09.760384    7621 command_runner.go:130] > /usr/bin/crictl
	I0223 12:57:09.760508    7621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 12:57:09.851511    7621 command_runner.go:130] > Version:  0.1.0
	I0223 12:57:09.851524    7621 command_runner.go:130] > RuntimeName:  docker
	I0223 12:57:09.851528    7621 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 12:57:09.851532    7621 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 12:57:09.853615    7621 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 12:57:09.853687    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:57:09.876935    7621 command_runner.go:130] > 23.0.1
	I0223 12:57:09.878452    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:57:09.900755    7621 command_runner.go:130] > 23.0.1
	I0223 12:57:09.948673    7621 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 12:57:09.948832    7621 cli_runner.go:164] Run: docker exec -t multinode-899000 dig +short host.docker.internal
	I0223 12:57:10.057868    7621 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 12:57:10.057985    7621 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 12:57:10.062491    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:57:10.072388    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:10.130751    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:57:10.130843    7621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 12:57:10.148841    7621 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 12:57:10.148854    7621 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 12:57:10.148859    7621 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 12:57:10.148866    7621 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 12:57:10.148871    7621 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 12:57:10.148874    7621 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 12:57:10.148879    7621 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 12:57:10.148888    7621 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 12:57:10.150542    7621 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 12:57:10.150557    7621 docker.go:560] Images already preloaded, skipping extraction
	I0223 12:57:10.150644    7621 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0223 12:57:10.169266    7621 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.26.1
	I0223 12:57:10.169279    7621 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.26.1
	I0223 12:57:10.169283    7621 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 12:57:10.169291    7621 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.26.1
	I0223 12:57:10.169297    7621 command_runner.go:130] > registry.k8s.io/etcd:3.5.6-0
	I0223 12:57:10.169302    7621 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0223 12:57:10.169307    7621 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I0223 12:57:10.169321    7621 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 12:57:10.170844    7621 docker.go:630] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.26.1
	registry.k8s.io/kube-scheduler:v1.26.1
	registry.k8s.io/kube-controller-manager:v1.26.1
	registry.k8s.io/kube-proxy:v1.26.1
	registry.k8s.io/etcd:3.5.6-0
	registry.k8s.io/pause:3.9
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0223 12:57:10.170854    7621 cache_images.go:84] Images are preloaded, skipping loading
	I0223 12:57:10.170947    7621 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 12:57:10.194294    7621 command_runner.go:130] > cgroupfs
	I0223 12:57:10.195997    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:57:10.196009    7621 cni.go:136] 1 nodes found, recommending kindnet
	I0223 12:57:10.196026    7621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 12:57:10.196044    7621 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899000 NodeName:multinode-899000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 12:57:10.196163    7621 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-899000"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
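	This generated kubeadm config is copied to /var/tmp/minikube/kubeadm.yaml.new further down and then fed to kubeadm init. When reproducing the setup by hand, the same file can be exercised without modifying the node via kubeadm's dry-run mode (sketch; only the two flags shown are assumed):

	    # Preview the init sequence for the generated config without applying anything.
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run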
	
	I0223 12:57:10.196244    7621 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-899000 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 12:57:10.196318    7621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 12:57:10.203463    7621 command_runner.go:130] > kubeadm
	I0223 12:57:10.203471    7621 command_runner.go:130] > kubectl
	I0223 12:57:10.203475    7621 command_runner.go:130] > kubelet
	I0223 12:57:10.204064    7621 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 12:57:10.204120    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 12:57:10.211323    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (448 bytes)
	I0223 12:57:10.223788    7621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 12:57:10.236358    7621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2092 bytes)
	I0223 12:57:10.248986    7621 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 12:57:10.252904    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
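	Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are pinned with the same idempotent /etc/hosts edit: drop any existing entry for the name, append the new mapping, and copy the temp file back over /etc/hosts. Written out as a standalone sketch with the values from this run:

	    # Idempotent /etc/hosts pinning, as used by the command above.
	    NAME=control-plane.minikube.internal
	    IP=192.168.58.2
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
	    sudo cp /tmp/hosts.$$ /etc/hosts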
	I0223 12:57:10.262635    7621 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000 for IP: 192.168.58.2
	I0223 12:57:10.262654    7621 certs.go:186] acquiring lock for shared ca certs: {Name:mk9b7a98958f4333f06cfa6d87963d4d7f2b94cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.262839    7621 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key
	I0223 12:57:10.262905    7621 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key
	I0223 12:57:10.262951    7621 certs.go:315] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key
	I0223 12:57:10.262964    7621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt with IP's: []
	I0223 12:57:10.322657    7621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt ...
	I0223 12:57:10.322666    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt: {Name:mk230eb0789e348d7769aaa30562130e292016de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.322950    7621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key ...
	I0223 12:57:10.322957    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key: {Name:mk81e6b74a5e9dc1cb3968aba8a3f96d82eec2bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.323154    7621 certs.go:315] generating minikube signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041
	I0223 12:57:10.323173    7621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0223 12:57:10.396692    7621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041 ...
	I0223 12:57:10.396700    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041: {Name:mk8f8abf41e20371cfca65b1f7d3d17c53f40fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.396905    7621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041 ...
	I0223 12:57:10.396914    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041: {Name:mk324908247e1988bfa2dea311b4e9ad6bbd9ae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.397093    7621 certs.go:333] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt.cee25041 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt
	I0223 12:57:10.397249    7621 certs.go:337] copying /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key.cee25041 -> /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key
	I0223 12:57:10.397410    7621 certs.go:315] generating aggregator signed cert: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key
	I0223 12:57:10.397424    7621 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt with IP's: []
	I0223 12:57:10.612767    7621 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt ...
	I0223 12:57:10.612776    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt: {Name:mkdec6c6a484a2eaf518126dea9253068b149693 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.612992    7621 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key ...
	I0223 12:57:10.613000    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key: {Name:mk364443d14fe67da4fb43f9103d14289df59b0d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:10.613172    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0223 12:57:10.613200    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0223 12:57:10.613219    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0223 12:57:10.613238    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0223 12:57:10.613259    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 12:57:10.613277    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 12:57:10.613295    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 12:57:10.613320    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 12:57:10.613411    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem (1338 bytes)
	W0223 12:57:10.613457    7621 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057_empty.pem, impossibly tiny 0 bytes
	I0223 12:57:10.613467    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 12:57:10.613498    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem (1078 bytes)
	I0223 12:57:10.613531    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem (1123 bytes)
	I0223 12:57:10.613565    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem (1675 bytes)
	I0223 12:57:10.613634    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:57:10.613668    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem -> /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.613687    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.613708    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.614190    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 12:57:10.632844    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 12:57:10.649894    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 12:57:10.666693    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 12:57:10.683665    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 12:57:10.700539    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 12:57:10.717410    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 12:57:10.734188    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 12:57:10.751234    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem --> /usr/share/ca-certificates/2057.pem (1338 bytes)
	I0223 12:57:10.768201    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /usr/share/ca-certificates/20572.pem (1708 bytes)
	I0223 12:57:10.784911    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 12:57:10.801764    7621 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 12:57:10.814485    7621 ssh_runner.go:195] Run: openssl version
	I0223 12:57:10.819597    7621 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 12:57:10.819916    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20572.pem && ln -fs /usr/share/ca-certificates/20572.pem /etc/ssl/certs/20572.pem"
	I0223 12:57:10.827936    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.831680    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.832204    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.832306    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20572.pem
	I0223 12:57:10.837895    7621 command_runner.go:130] > 3ec20f2e
	I0223 12:57:10.838093    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20572.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 12:57:10.845936    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 12:57:10.853817    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.857662    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.857948    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.858019    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:57:10.863075    7621 command_runner.go:130] > b5213941
	I0223 12:57:10.863486    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 12:57:10.871544    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2057.pem && ln -fs /usr/share/ca-certificates/2057.pem /etc/ssl/certs/2057.pem"
	I0223 12:57:10.879340    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.883115    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.883178    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.883222    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2057.pem
	I0223 12:57:10.888180    7621 command_runner.go:130] > 51391683
	I0223 12:57:10.888436    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2057.pem /etc/ssl/certs/51391683.0"
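	Each of the three certificates above is made trusted the same way: copy the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 to it so that OpenSSL's hash-based lookup finds it. Condensed into a sketch (using the minikubeCA path from this run):

	    # Register a CA with OpenSSL's hash-directory lookup, as done above.
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"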
	I0223 12:57:10.896145    7621 kubeadm.go:401] StartCluster: {Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:57:10.896244    7621 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0223 12:57:10.915902    7621 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 12:57:10.923723    7621 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0223 12:57:10.923735    7621 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0223 12:57:10.923740    7621 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0223 12:57:10.923802    7621 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 12:57:10.931232    7621 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I0223 12:57:10.931302    7621 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 12:57:10.939026    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0223 12:57:10.939042    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0223 12:57:10.939048    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0223 12:57:10.939055    7621 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 12:57:10.939078    7621 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 12:57:10.939099    7621 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0223 12:57:10.987638    7621 kubeadm.go:322] [init] Using Kubernetes version: v1.26.1
	I0223 12:57:10.987645    7621 command_runner.go:130] > [init] Using Kubernetes version: v1.26.1
	I0223 12:57:10.987688    7621 kubeadm.go:322] [preflight] Running pre-flight checks
	I0223 12:57:10.987699    7621 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 12:57:11.093412    7621 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 12:57:11.093422    7621 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0223 12:57:11.093496    7621 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 12:57:11.093503    7621 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0223 12:57:11.093587    7621 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 12:57:11.093599    7621 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0223 12:57:11.221589    7621 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 12:57:11.221639    7621 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0223 12:57:11.263803    7621 out.go:204]   - Generating certificates and keys ...
	I0223 12:57:11.263920    7621 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0223 12:57:11.263935    7621 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0223 12:57:11.264003    7621 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0223 12:57:11.264016    7621 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0223 12:57:11.408786    7621 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 12:57:11.408793    7621 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0223 12:57:11.471993    7621 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0223 12:57:11.472006    7621 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0223 12:57:11.554812    7621 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0223 12:57:11.554823    7621 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0223 12:57:11.641056    7621 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0223 12:57:11.641070    7621 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0223 12:57:11.707218    7621 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0223 12:57:11.707228    7621 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0223 12:57:11.707346    7621 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.707359    7621 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.795926    7621 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0223 12:57:11.795933    7621 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0223 12:57:11.796051    7621 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.796060    7621 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-899000] and IPs [192.168.58.2 127.0.0.1 ::1]
	I0223 12:57:11.931956    7621 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 12:57:11.931968    7621 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0223 12:57:12.191982    7621 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 12:57:12.192004    7621 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0223 12:57:12.255363    7621 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0223 12:57:12.255372    7621 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0223 12:57:12.255424    7621 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 12:57:12.255433    7621 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0223 12:57:12.469451    7621 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 12:57:12.469462    7621 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0223 12:57:12.702193    7621 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 12:57:12.702205    7621 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0223 12:57:12.921511    7621 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 12:57:12.921528    7621 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0223 12:57:12.968259    7621 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 12:57:12.968269    7621 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0223 12:57:12.978609    7621 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:57:12.978618    7621 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:57:12.979200    7621 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:57:12.979222    7621 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:57:12.979261    7621 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0223 12:57:12.979268    7621 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 12:57:13.054089    7621 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 12:57:13.054114    7621 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0223 12:57:13.075671    7621 out.go:204]   - Booting up control plane ...
	I0223 12:57:13.075775    7621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 12:57:13.075783    7621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0223 12:57:13.075865    7621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 12:57:13.075872    7621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0223 12:57:13.075931    7621 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 12:57:13.075945    7621 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0223 12:57:13.076052    7621 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 12:57:13.076059    7621 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0223 12:57:13.076171    7621 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 12:57:13.076175    7621 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0223 12:57:21.560293    7621 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.501670 seconds
	I0223 12:57:21.560317    7621 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.501670 seconds
	I0223 12:57:21.560447    7621 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 12:57:21.560459    7621 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0223 12:57:21.568305    7621 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 12:57:21.568320    7621 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0223 12:57:22.082460    7621 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0223 12:57:22.082473    7621 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0223 12:57:22.082624    7621 kubeadm.go:322] [mark-control-plane] Marking the node multinode-899000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 12:57:22.082634    7621 command_runner.go:130] > [mark-control-plane] Marking the node multinode-899000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0223 12:57:22.589892    7621 kubeadm.go:322] [bootstrap-token] Using token: ybgu28.y4z8wg7gwd9t6sqw
	I0223 12:57:22.589930    7621 command_runner.go:130] > [bootstrap-token] Using token: ybgu28.y4z8wg7gwd9t6sqw
	I0223 12:57:22.611477    7621 out.go:204]   - Configuring RBAC rules ...
	I0223 12:57:22.611583    7621 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 12:57:22.611589    7621 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0223 12:57:22.652250    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 12:57:22.652265    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0223 12:57:22.657025    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 12:57:22.657035    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0223 12:57:22.659055    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 12:57:22.659061    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0223 12:57:22.661421    7621 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 12:57:22.661434    7621 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0223 12:57:22.663356    7621 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 12:57:22.663366    7621 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0223 12:57:22.670969    7621 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 12:57:22.670982    7621 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0223 12:57:22.814755    7621 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0223 12:57:22.814772    7621 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0223 12:57:23.055839    7621 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0223 12:57:23.055856    7621 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0223 12:57:23.056216    7621 kubeadm.go:322] 
	I0223 12:57:23.056299    7621 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0223 12:57:23.056315    7621 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0223 12:57:23.056342    7621 kubeadm.go:322] 
	I0223 12:57:23.056407    7621 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0223 12:57:23.056416    7621 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0223 12:57:23.056420    7621 kubeadm.go:322] 
	I0223 12:57:23.056465    7621 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0223 12:57:23.056484    7621 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0223 12:57:23.056553    7621 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 12:57:23.056565    7621 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0223 12:57:23.056636    7621 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 12:57:23.056650    7621 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0223 12:57:23.056659    7621 kubeadm.go:322] 
	I0223 12:57:23.056790    7621 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0223 12:57:23.056797    7621 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0223 12:57:23.056806    7621 kubeadm.go:322] 
	I0223 12:57:23.056858    7621 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 12:57:23.056867    7621 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0223 12:57:23.056879    7621 kubeadm.go:322] 
	I0223 12:57:23.056964    7621 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0223 12:57:23.056973    7621 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0223 12:57:23.057024    7621 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 12:57:23.057029    7621 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0223 12:57:23.057072    7621 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 12:57:23.057077    7621 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0223 12:57:23.057082    7621 kubeadm.go:322] 
	I0223 12:57:23.057168    7621 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0223 12:57:23.057175    7621 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0223 12:57:23.057235    7621 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0223 12:57:23.057240    7621 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0223 12:57:23.057243    7621 kubeadm.go:322] 
	I0223 12:57:23.057331    7621 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057340    7621 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057472    7621 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 \
	I0223 12:57:23.057480    7621 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 \
	I0223 12:57:23.057500    7621 kubeadm.go:322] 	--control-plane 
	I0223 12:57:23.057512    7621 command_runner.go:130] > 	--control-plane 
	I0223 12:57:23.057539    7621 kubeadm.go:322] 
	I0223 12:57:23.057609    7621 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0223 12:57:23.057619    7621 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0223 12:57:23.057639    7621 kubeadm.go:322] 
	I0223 12:57:23.057715    7621 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057721    7621 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token ybgu28.y4z8wg7gwd9t6sqw \
	I0223 12:57:23.057859    7621 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 
	I0223 12:57:23.057871    7621 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 
	I0223 12:57:23.061202    7621 kubeadm.go:322] W0223 20:57:10.980367    1300 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 12:57:23.061234    7621 command_runner.go:130] ! W0223 20:57:10.980367    1300 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 12:57:23.061402    7621 kubeadm.go:322] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 12:57:23.061415    7621 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 12:57:23.061519    7621 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:57:23.061528    7621 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:57:23.061554    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:57:23.061565    7621 cni.go:136] 1 nodes found, recommending kindnet
	I0223 12:57:23.101101    7621 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0223 12:57:23.138066    7621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 12:57:23.143657    7621 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 12:57:23.143672    7621 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 12:57:23.143677    7621 command_runner.go:130] > Device: a6h/166d	Inode: 2102733     Links: 1
	I0223 12:57:23.143681    7621 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:57:23.143693    7621 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:57:23.143699    7621 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:57:23.143703    7621 command_runner.go:130] > Change: 2023-02-23 20:33:51.991471766 +0000
	I0223 12:57:23.143706    7621 command_runner.go:130] >  Birth: -
	I0223 12:57:23.143729    7621 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 12:57:23.143735    7621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 12:57:23.157035    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 12:57:23.749482    7621 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0223 12:57:23.753013    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0223 12:57:23.758972    7621 command_runner.go:130] > serviceaccount/kindnet created
	I0223 12:57:23.765872    7621 command_runner.go:130] > daemonset.apps/kindnet created
	I0223 12:57:23.771409    7621 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 12:57:23.771483    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:23.771497    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl label nodes minikube.k8s.io/version=v1.29.0 minikube.k8s.io/commit=7816f70daabe48630c945a757f21bf8d759fce7d minikube.k8s.io/name=multinode-899000 minikube.k8s.io/updated_at=2023_02_23T12_57_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:23.868977    7621 command_runner.go:130] > node/multinode-899000 labeled
	I0223 12:57:23.872482    7621 command_runner.go:130] > -16
	I0223 12:57:23.872515    7621 ops.go:34] apiserver oom_adj: -16
	I0223 12:57:23.872561    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
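
The commands dispatched above read the apiserver's oom_adj, create the minikube-rbac cluster role binding, and label the new node with minikube metadata; the results arrive in the following log lines. As an illustration only, the same node labeling can also be done through the API rather than via 'kubectl label'. The sketch below is a minimal Go program using client-go; the kubeconfig path is a placeholder (the run above goes through /var/lib/minikube/kubeconfig on the node), and only two of the logged labels are applied.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path, not the file used by this test run.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Merge-patch a subset of the labels that the kubectl label invocation above sets.
	patch := []byte(`{"metadata":{"labels":{"minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(context.TODO(), "multinode-899000", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node multinode-899000 labeled")
}
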
	I0223 12:57:23.872663    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:23.937423    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:24.439650    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:24.504999    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:24.939688    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:25.004046    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:25.438090    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:25.501923    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:25.938865    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:26.001562    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:26.438455    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:26.503890    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:26.939517    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:27.004031    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:27.437863    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:27.500361    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:27.938317    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:28.002108    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:28.439927    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:28.504960    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:28.938536    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:29.002486    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:29.437806    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:29.502903    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:29.937739    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:30.002660    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:30.438090    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:30.502726    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:30.938104    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:31.000839    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:31.439249    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:31.503527    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:31.938436    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:31.999326    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:32.439988    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:32.504038    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:32.937939    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:33.003858    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:33.437817    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:33.501563    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:33.938475    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:34.002602    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:34.437957    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:34.501963    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:34.937861    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:35.000751    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:35.438256    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:35.502704    7621 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0223 12:57:35.937865    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0223 12:57:36.036087    7621 command_runner.go:130] > NAME      SECRETS   AGE
	I0223 12:57:36.036102    7621 command_runner.go:130] > default   0         1s
	I0223 12:57:36.039705    7621 kubeadm.go:1073] duration metric: took 12.268065786s to wait for elevateKubeSystemPrivileges.
	I0223 12:57:36.039722    7621 kubeadm.go:403] StartCluster complete in 25.14312823s
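
The retry loop above, which repeatedly runs 'kubectl get sa default' until the ServiceAccount exists, is the wait measured by the 12.27s elevateKubeSystemPrivileges metric. A rough client-go equivalent of that readiness poll is sketched below; the package and function names are illustrative, and it assumes a *kubernetes.Clientset built as in the labeling sketch earlier.

package waitsa

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForDefaultServiceAccount polls until the "default" ServiceAccount exists
// in the given namespace, mirroring the repeated kubectl calls in the log above.
func WaitForDefaultServiceAccount(cs *kubernetes.Clientset, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		_, err := cs.CoreV1().ServiceAccounts(namespace).Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not found after %s: %w", timeout, err)
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
	}
}
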
	I0223 12:57:36.039740    7621 settings.go:142] acquiring lock: {Name:mkbd8676df55bd54ade697ff92726c4299ba6b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:36.039832    7621 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.040283    7621 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/kubeconfig: {Name:mka45aca5add49860892d9e622eefcdfd6860a2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:57:36.040527    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 12:57:36.040554    7621 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 12:57:36.040618    7621 addons.go:65] Setting storage-provisioner=true in profile "multinode-899000"
	I0223 12:57:36.040623    7621 addons.go:65] Setting default-storageclass=true in profile "multinode-899000"
	I0223 12:57:36.040640    7621 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-899000"
	I0223 12:57:36.040641    7621 addons.go:227] Setting addon storage-provisioner=true in "multinode-899000"
	I0223 12:57:36.040676    7621 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:57:36.040682    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:57:36.040739    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.040916    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:36.041014    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:36.040998    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:57:36.044746    7621 cert_rotation.go:137] Starting client certificate rotation controller
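
The rest.Config printed above is the client configuration minikube assembles for this profile: the API server at https://127.0.0.1:51104 plus the profile's client certificate, key, and CA. Below is a hedged sketch of building an equivalent client directly from those fields with client-go; the paths are the ones logged above and only have meaning inside this test environment.

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Field values mirror the kapi.go dump above.
	cfg := &rest.Config{
		Host: "https://127.0.0.1:51104",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt",
			KeyFile:  "/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key",
			CAFile:   "/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to", cfg.Host, "running", ver.GitVersion)
}
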
	I0223 12:57:36.045004    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:57:36.045013    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.045024    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.045032    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.054436    7621 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0223 12:57:36.054461    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.054471    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.054479    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.054487    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.054495    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:57:36.054503    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.054511    7621 round_trippers.go:580]     Audit-Id: dd961c59-3599-402a-98cb-62fc19792a60
	I0223 12:57:36.054525    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.054558    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"355","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.054961    7621 request.go:1171] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"355","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.055000    7621 round_trippers.go:463] PUT https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:57:36.055005    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.055012    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.055019    7621 round_trippers.go:473]     Content-Type: application/json
	I0223 12:57:36.055027    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.060700    7621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 12:57:36.060725    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.060734    7621 round_trippers.go:580]     Audit-Id: e48c32a4-1372-443f-b55c-7f94a1ae5b6b
	I0223 12:57:36.060743    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.060770    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.060802    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.060827    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.060842    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:57:36.060851    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.060880    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"357","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.107363    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.107563    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:57:36.107813    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/storage.k8s.io/v1/storageclasses
	I0223 12:57:36.107819    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.107826    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.107835    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.132569    7621 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 12:57:36.154288    7621 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 12:57:36.154302    7621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 12:57:36.154384    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:36.156665    7621 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0223 12:57:36.156685    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.156691    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.156698    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.156711    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.156719    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.156726    7621 round_trippers.go:580]     Content-Length: 109
	I0223 12:57:36.156732    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.156739    7621 round_trippers.go:580]     Audit-Id: 0f34f23d-8c22-47df-b655-26e2f7a8b4df
	I0223 12:57:36.156757    7621 request.go:1171] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"366"},"items":[]}
	I0223 12:57:36.156967    7621 addons.go:227] Setting addon default-storageclass=true in "multinode-899000"
	I0223 12:57:36.156992    7621 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:57:36.157338    7621 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:57:36.160767    7621 command_runner.go:130] > apiVersion: v1
	I0223 12:57:36.160791    7621 command_runner.go:130] > data:
	I0223 12:57:36.160795    7621 command_runner.go:130] >   Corefile: |
	I0223 12:57:36.160799    7621 command_runner.go:130] >     .:53 {
	I0223 12:57:36.160802    7621 command_runner.go:130] >         errors
	I0223 12:57:36.160806    7621 command_runner.go:130] >         health {
	I0223 12:57:36.160813    7621 command_runner.go:130] >            lameduck 5s
	I0223 12:57:36.160818    7621 command_runner.go:130] >         }
	I0223 12:57:36.160821    7621 command_runner.go:130] >         ready
	I0223 12:57:36.160828    7621 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0223 12:57:36.160833    7621 command_runner.go:130] >            pods insecure
	I0223 12:57:36.160839    7621 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0223 12:57:36.160845    7621 command_runner.go:130] >            ttl 30
	I0223 12:57:36.160850    7621 command_runner.go:130] >         }
	I0223 12:57:36.160853    7621 command_runner.go:130] >         prometheus :9153
	I0223 12:57:36.160865    7621 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0223 12:57:36.160874    7621 command_runner.go:130] >            max_concurrent 1000
	I0223 12:57:36.160878    7621 command_runner.go:130] >         }
	I0223 12:57:36.160883    7621 command_runner.go:130] >         cache 30
	I0223 12:57:36.160887    7621 command_runner.go:130] >         loop
	I0223 12:57:36.160890    7621 command_runner.go:130] >         reload
	I0223 12:57:36.160894    7621 command_runner.go:130] >         loadbalance
	I0223 12:57:36.160897    7621 command_runner.go:130] >     }
	I0223 12:57:36.160901    7621 command_runner.go:130] > kind: ConfigMap
	I0223 12:57:36.160904    7621 command_runner.go:130] > metadata:
	I0223 12:57:36.160910    7621 command_runner.go:130] >   creationTimestamp: "2023-02-23T20:57:22Z"
	I0223 12:57:36.160914    7621 command_runner.go:130] >   name: coredns
	I0223 12:57:36.160917    7621 command_runner.go:130] >   namespace: kube-system
	I0223 12:57:36.160921    7621 command_runner.go:130] >   resourceVersion: "235"
	I0223 12:57:36.160925    7621 command_runner.go:130] >   uid: 8f8b9516-d5e4-49cf-a150-eb5868d86ded
	I0223 12:57:36.161087    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.65.2 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.26.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
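
The pipeline above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to 192.168.65.2 from inside the cluster (a 'log' directive is also inserted before 'errors'). After the replace, the forward section of the Corefile dumped earlier reads:

        hosts {
           192.168.65.2 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
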
	I0223 12:57:36.220070    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:36.220596    7621 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 12:57:36.220609    7621 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 12:57:36.220672    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:36.284715    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:57:36.344918    7621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 12:57:36.453000    7621 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.26.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 12:57:36.460379    7621 command_runner.go:130] > configmap/coredns replaced
	I0223 12:57:36.460408    7621 start.go:921] {"host.minikube.internal": 192.168.65.2} host record injected into CoreDNS's ConfigMap
	I0223 12:57:36.561205    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:57:36.561228    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.561235    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.561242    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.563946    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:36.563963    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.563970    7621 round_trippers.go:580]     Audit-Id: 3c26d50a-f7a7-499b-af55-cd9f8eb2d0ab
	I0223 12:57:36.563977    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.563984    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.563990    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.563994    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.564007    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:57:36.564026    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.564048    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"366","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 12:57:36.564127    7621 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899000" context rescaled to 1 replicas
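
The GET and PUT on .../deployments/coredns/scale above are the rescale of CoreDNS to a single replica through the autoscaling/v1 Scale subresource. The same two requests, expressed with client-go under the same Clientset assumption as the earlier sketches (names below are illustrative), would look roughly like:

package rescale

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ScaleCoreDNS fetches the Scale subresource of the coredns Deployment, sets the
// desired replica count, and writes it back; this is the GET/PUT pair in the log above.
func ScaleCoreDNS(cs *kubernetes.Clientset, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas
	_, err = deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{})
	return err
}
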
	I0223 12:57:36.564155    7621 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 12:57:36.585743    7621 out.go:177] * Verifying Kubernetes components...
	I0223 12:57:36.627187    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:57:36.743198    7621 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0223 12:57:36.747083    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0223 12:57:36.753927    7621 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 12:57:36.759155    7621 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0223 12:57:36.772697    7621 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0223 12:57:36.836193    7621 command_runner.go:130] > pod/storage-provisioner created
	I0223 12:57:36.860851    7621 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0223 12:57:36.868076    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:57:36.907527    7621 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 12:57:36.980297    7621 addons.go:492] enable addons completed in 939.672556ms: enabled=[storage-provisioner default-storageclass]
	I0223 12:57:36.992459    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:57:36.992713    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:57:36.992965    7621 node_ready.go:35] waiting up to 6m0s for node "multinode-899000" to be "Ready" ...
	I0223 12:57:36.993015    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:36.993021    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.993027    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:36.993032    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.996802    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:36.996823    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:36.996832    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:36.996840    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:36.996848    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:36.996855    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:36.996863    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:36 GMT
	I0223 12:57:36.996869    7621 round_trippers.go:580]     Audit-Id: f0f9dd4c-b992-4485-9384-585881abd75e
	I0223 12:57:36.996967    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:36.997735    7621 node_ready.go:49] node "multinode-899000" has status "Ready":"True"
	I0223 12:57:36.997746    7621 node_ready.go:38] duration metric: took 4.76415ms waiting for node "multinode-899000" to be "Ready" ...
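
The node_ready.go check above is a GET of the Node object followed by a look at its Ready condition, which in this run is already True about 4.8ms in. A small client-go sketch of that check, with the same Clientset assumption and illustrative names as in the earlier sketches:

package nodeready

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeIsReady reports whether the named node has a Ready condition with status True,
// the condition the waiter above is looking for.
func NodeIsReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
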
	I0223 12:57:36.997753    7621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:57:36.997805    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:36.997811    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:36.997817    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:36.997823    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.001898    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:37.001918    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.001927    7621 round_trippers.go:580]     Audit-Id: bdf8cacd-4af7-4bd3-ab48-8eddf650fd0b
	I0223 12:57:37.001934    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.001939    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.001944    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.001950    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.001962    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.003565    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"380"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 60224 chars]
	I0223 12:57:37.006197    7621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:37.006244    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:37.006249    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.006256    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.006263    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.009384    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:37.009401    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.009408    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.009416    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.009423    7621 round_trippers.go:580]     Audit-Id: 3ebb44ee-ca50-4638-a97a-bb51bbce28d8
	I0223 12:57:37.009430    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.009439    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.009451    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.009555    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:37.009854    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:37.009864    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.009873    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.009883    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.012353    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:37.012374    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.012380    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.012385    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.012390    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.012395    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.012400    7621 round_trippers.go:580]     Audit-Id: f1d4595f-af15-4f73-a208-560768a68e81
	I0223 12:57:37.012405    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.012462    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:37.512791    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:37.512810    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.512818    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.512827    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.537589    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:37.537607    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.537616    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.537622    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.537628    7621 round_trippers.go:580]     Audit-Id: 71bed69f-4164-46a2-a2a1-4bfd3fd2a2a6
	I0223 12:57:37.537635    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.537646    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.537654    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.538674    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:37.538989    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:37.538997    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:37.539003    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:37.539010    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:37.542803    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:37.542830    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:37.542846    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:37.542861    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:37.542870    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:37.542876    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:37 GMT
	I0223 12:57:37.542882    7621 round_trippers.go:580]     Audit-Id: 4386b824-d716-4d25-9790-c3de02e3fb0b
	I0223 12:57:37.542890    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:37.542966    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:38.012954    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:38.012979    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.013031    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.013044    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.016760    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:38.016772    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.016778    7621 round_trippers.go:580]     Audit-Id: b08cfd64-ecf8-464e-9731-287102cfe4f5
	I0223 12:57:38.016783    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.016788    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.016793    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.016804    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.016810    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.016870    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:38.017143    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:38.017149    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.017154    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.017161    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.019116    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:38.019125    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.019130    7621 round_trippers.go:580]     Audit-Id: abd5f201-42fc-4fab-a3b2-3cb256e7eb56
	I0223 12:57:38.019134    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.019139    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.019144    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.019149    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.019154    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.019223    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:38.514981    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:38.515002    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.515015    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.515026    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.519296    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:38.519310    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.519316    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.519321    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.519326    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.519331    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.519336    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.519341    7621 round_trippers.go:580]     Audit-Id: faa8eea9-2343-46b5-b6f9-643cfd45a748
	I0223 12:57:38.519406    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:38.519732    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:38.519739    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:38.519747    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:38.519752    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:38.522065    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:38.522073    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:38.522079    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:38.522083    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:38.522090    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:38.522095    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:38.522101    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:38 GMT
	I0223 12:57:38.522106    7621 round_trippers.go:580]     Audit-Id: 4c6dce51-d02c-45bd-96bc-f1655baa6949
	I0223 12:57:38.522164    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:39.014867    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:39.014879    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.014888    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.014897    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.019376    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:39.019404    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.019419    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.019429    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.019435    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.019441    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.019450    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.019466    7621 round_trippers.go:580]     Audit-Id: 80e796a7-324f-402c-aa5e-bc94e0810dd9
	I0223 12:57:39.019567    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"353","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 5810 chars]
	I0223 12:57:39.020002    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:39.020012    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.020027    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.020041    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.023871    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:39.023907    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.023929    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.023937    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.023942    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.023948    7621 round_trippers.go:580]     Audit-Id: b6dec412-53d3-4433-973c-837c9d2426df
	I0223 12:57:39.023953    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.023958    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.024109    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:39.024309    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:39.514707    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:39.514723    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.514733    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.514739    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.535462    7621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0223 12:57:39.535474    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.535480    7621 round_trippers.go:580]     Audit-Id: 6a1be3c6-3b6b-43db-8129-ddd2009aa8be
	I0223 12:57:39.535485    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.535495    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.535501    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.535506    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.535512    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.535574    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:39.535844    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:39.535850    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:39.535856    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:39.535861    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:39.538413    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:39.538430    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:39.538437    7621 round_trippers.go:580]     Audit-Id: 345651d6-9c0d-4dde-9273-66959f725b05
	I0223 12:57:39.538450    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:39.538461    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:39.538466    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:39.538471    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:39.538476    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:39 GMT
	I0223 12:57:39.538540    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:40.014828    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:40.014859    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.014923    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.014931    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.018508    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:40.018530    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.018539    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.018544    7621 round_trippers.go:580]     Audit-Id: 1dffcbc6-de4c-441d-be9d-bd9ad27ccd94
	I0223 12:57:40.018549    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.018553    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.018558    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.018562    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.018651    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:40.018961    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:40.018968    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.018976    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.018989    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.021421    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:40.021432    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.021438    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.021443    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.021451    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.021458    7621 round_trippers.go:580]     Audit-Id: 7e8457cf-7a16-4655-8693-9aa47bce26ad
	I0223 12:57:40.021464    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.021469    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.021535    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:40.513070    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:40.513091    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.513103    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.513114    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.517234    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:40.517249    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.517255    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.517263    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.517268    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.517273    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.517280    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.517286    7621 round_trippers.go:580]     Audit-Id: f96ed6fa-bcb4-4c8b-bc49-941245c06d9a
	I0223 12:57:40.517347    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:40.517628    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:40.517634    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:40.517640    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:40.517646    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:40.519538    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:40.519547    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:40.519553    7621 round_trippers.go:580]     Audit-Id: 6818c246-e938-400d-a1f3-6b547c0e2c14
	I0223 12:57:40.519558    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:40.519563    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:40.519568    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:40.519573    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:40.519578    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:40 GMT
	I0223 12:57:40.519636    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:41.014271    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:41.014343    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.014358    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.014368    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.018448    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:41.018461    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.018472    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.018479    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.018486    7621 round_trippers.go:580]     Audit-Id: 88efea70-7042-4966-b01d-5c6b4eae4d29
	I0223 12:57:41.018493    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.018499    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.018505    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.018591    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:41.018848    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:41.018854    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.018859    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.018865    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.021219    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:41.021229    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.021235    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.021241    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.021246    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.021251    7621 round_trippers.go:580]     Audit-Id: 01866206-3a6c-4fa5-93ca-f2b19b3ae405
	I0223 12:57:41.021257    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.021262    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.021325    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:41.515042    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:41.515063    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.515075    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.515091    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.519528    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:41.519539    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.519545    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.519550    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.519556    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.519563    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.519568    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.519573    7621 round_trippers.go:580]     Audit-Id: 92a528a6-61be-4433-b9ce-aea2231336aa
	I0223 12:57:41.519778    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:41.520049    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:41.520057    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:41.520065    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:41.520073    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:41.522204    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:41.522214    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:41.522221    7621 round_trippers.go:580]     Audit-Id: a28d9c1d-7dd5-4e92-8e3b-5bc0f2c1495b
	I0223 12:57:41.522228    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:41.522236    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:41.522241    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:41.522246    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:41.522251    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:41 GMT
	I0223 12:57:41.522529    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:41.522715    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:42.015184    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:42.015204    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.015217    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.015227    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.019399    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:42.019416    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.019422    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.019427    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.019431    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.019437    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.019447    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.019452    7621 round_trippers.go:580]     Audit-Id: c3c7d015-1b01-4fc7-9584-693d9efea2d0
	I0223 12:57:42.019513    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:42.019789    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:42.019795    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.019801    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.019807    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.022150    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:42.022160    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.022165    7621 round_trippers.go:580]     Audit-Id: 6cd5d782-1f60-4f2f-9ffc-b576d89df22f
	I0223 12:57:42.022170    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.022183    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.022189    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.022194    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.022199    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.022276    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:42.512925    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:42.512939    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.512958    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.512964    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.515730    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:42.515743    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.515749    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.515755    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.515763    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.515768    7621 round_trippers.go:580]     Audit-Id: 9889836c-50e1-473d-aba6-d9410a5c0316
	I0223 12:57:42.515773    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.515778    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.515846    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:42.516196    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:42.516203    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:42.516209    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:42.516215    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:42.518461    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:42.518473    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:42.518481    7621 round_trippers.go:580]     Audit-Id: 307c83df-61cf-4812-9aa9-4f95df344503
	I0223 12:57:42.518486    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:42.518492    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:42.518498    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:42.518502    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:42.518508    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:42 GMT
	I0223 12:57:42.518570    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:43.013393    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:43.013414    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.013426    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.013436    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.037715    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:43.037739    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.037748    7621 round_trippers.go:580]     Audit-Id: 9601bb68-0ae3-473f-8600-a8b450d46691
	I0223 12:57:43.037755    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.037764    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.037774    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.037785    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.037795    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.037974    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:43.038444    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:43.038461    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.038473    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.038485    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.041542    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:43.041564    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.041581    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.041591    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.041597    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.041608    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.041622    7621 round_trippers.go:580]     Audit-Id: defe9c98-2a34-4afb-abaa-124d5238f440
	I0223 12:57:43.041635    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.041765    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:43.513808    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:43.513829    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.513841    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.513855    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.536676    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:43.536699    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.536713    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.536723    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.536728    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.536734    7621 round_trippers.go:580]     Audit-Id: 546c3cfe-00ca-4dae-8b95-ad8a9b3ad7ce
	I0223 12:57:43.536740    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.536746    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.536845    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:43.537151    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:43.537158    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:43.537164    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:43.537169    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:43.539378    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:43.539394    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:43.539403    7621 round_trippers.go:580]     Audit-Id: 9b39f22e-4c86-4f5c-b7da-d15f9ffa01b1
	I0223 12:57:43.539410    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:43.539416    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:43.539423    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:43.539429    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:43.539434    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:43 GMT
	I0223 12:57:43.539502    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"308","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:43.539712    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
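	(Aside on reading this trace: the round_trippers lines are client-go's verbose HTTP tracing. Each API call is logged as the request line, its request headers, the response status with latency, the response headers, and a truncated response body. The sketch below is a hypothetical logging http.RoundTripper that produces output of roughly this shape; the type and function names are illustrative and are not client-go's internals, and the URL is the apiserver address from this run, so the request only succeeds against that cluster with its credentials.)

	package main

	import (
		"log"
		"net/http"
		"time"
	)

	// loggingTransport wraps another RoundTripper and prints a trace of each
	// request/response, similar in shape to the round_trippers.go lines above.
	type loggingTransport struct {
		next http.RoundTripper
	}

	func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		log.Printf("Request Headers:")
		for k, v := range req.Header {
			log.Printf("    %s: %v", k, v)
		}
		start := time.Now()
		resp, err := t.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		log.Printf("Response Headers:")
		for k, v := range resp.Header {
			log.Printf("    %s: %v", k, v)
		}
		return resp, nil
	}

	func main() {
		// Wrapping http.DefaultTransport keeps call sites unchanged while
		// every request made through this client gets traced.
		client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
		resp, err := client.Get("https://127.0.0.1:51104/api/v1/nodes/multinode-899000")
		if err != nil {
			log.Fatal(err) // expected unless the test cluster from this run is reachable
		}
		resp.Body.Close()
	}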
	I0223 12:57:44.015105    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:44.015125    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.015137    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.015147    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.037529    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:44.037548    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.037556    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.037565    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.037575    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.037584    7621 round_trippers.go:580]     Audit-Id: bfb6cc20-39cb-4a64-82a9-4b6dface125d
	I0223 12:57:44.037595    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.037605    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.037693    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:44.038083    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:44.038092    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.038114    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.038120    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.040623    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:44.040639    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.040648    7621 round_trippers.go:580]     Audit-Id: 90f2c301-3066-4c6e-9217-49a4baceaa01
	I0223 12:57:44.040662    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.040675    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.040687    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.040707    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.040716    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.040901    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:44.513272    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:44.513291    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.513303    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.513313    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.536386    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:44.536426    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.536442    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.536453    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.536463    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.536477    7621 round_trippers.go:580]     Audit-Id: c6ad94bd-fe85-4fce-ae78-356285bbd1b8
	I0223 12:57:44.536500    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.536520    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.537149    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:44.537566    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:44.537577    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:44.537586    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:44.537592    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:44.540011    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:44.540024    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:44.540030    7621 round_trippers.go:580]     Audit-Id: 68dd7b06-5f1d-4848-8a96-e8defec195b1
	I0223 12:57:44.540042    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:44.540047    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:44.540053    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:44.540060    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:44.540065    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:44 GMT
	I0223 12:57:44.540126    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:45.013158    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:45.013180    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.013195    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.013204    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.036554    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:45.036576    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.036587    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.036597    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.036607    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.036621    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.036632    7621 round_trippers.go:580]     Audit-Id: cc4181fe-a211-48a3-b00a-c24bb22e4237
	I0223 12:57:45.036642    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.036753    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:45.037242    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:45.037253    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.037265    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.037279    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.040736    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:45.040751    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.040760    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.040769    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.040780    7621 round_trippers.go:580]     Audit-Id: accab3ac-a9cf-427b-bcef-70e328e7bf0e
	I0223 12:57:45.040798    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.040806    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.040812    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.041491    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:45.513687    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:45.513707    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.513719    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.513729    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.537559    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:45.537577    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.537585    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.537594    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.537600    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.537609    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.537618    7621 round_trippers.go:580]     Audit-Id: dec67177-e9d4-41ba-a87d-657c61ff2373
	I0223 12:57:45.537625    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.537712    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:45.538102    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:45.538108    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:45.538114    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:45.538119    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:45.540504    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:45.540515    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:45.540520    7621 round_trippers.go:580]     Audit-Id: a22286fa-6b13-446b-9664-b027bc6ce8c4
	I0223 12:57:45.540525    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:45.540530    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:45.540535    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:45.540543    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:45.540548    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:45 GMT
	I0223 12:57:45.540617    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:45.540835    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:46.014347    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:46.014368    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.014380    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.014390    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.036415    7621 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0223 12:57:46.036433    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.036440    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.036447    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.036454    7621 round_trippers.go:580]     Audit-Id: 2b513dcc-ebe9-4a3e-9e2a-97c60684bd50
	I0223 12:57:46.036462    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.036473    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.036479    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.036563    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:46.036973    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:46.036980    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.036986    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.036993    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.039212    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:46.039225    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.039233    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.039241    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.039247    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.039252    7621 round_trippers.go:580]     Audit-Id: 164ba9f0-7204-44a2-a8af-c21c95b04b54
	I0223 12:57:46.039258    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.039288    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.039383    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:46.513395    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:46.513416    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.513429    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.513439    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.538393    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:46.538412    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.538421    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.538428    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.538434    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.538441    7621 round_trippers.go:580]     Audit-Id: 323ad531-874d-430d-be12-ca0bebe1a142
	I0223 12:57:46.538447    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.538453    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.538545    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:46.538842    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:46.538852    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:46.538858    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:46.538863    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:46.541442    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:46.541455    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:46.541463    7621 round_trippers.go:580]     Audit-Id: e2220d08-7521-4b3a-b1eb-678b93056002
	I0223 12:57:46.541478    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:46.541485    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:46.541491    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:46.541499    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:46.541505    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:46 GMT
	I0223 12:57:46.542226    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:47.014109    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:47.014130    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.014142    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.014152    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.037909    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:47.037926    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.037935    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.037942    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.037949    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.037955    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.037962    7621 round_trippers.go:580]     Audit-Id: 7da4ee61-126c-4e55-af6c-b92b3e58d1c3
	I0223 12:57:47.037969    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.038268    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:47.038588    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:47.038595    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.038603    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.038609    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.040988    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:47.041001    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.041010    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.041030    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.041041    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.041050    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.041058    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.041065    7621 round_trippers.go:580]     Audit-Id: 0f28c6f2-a920-40ba-85e5-229b65615408
	I0223 12:57:47.041152    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:47.513505    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:47.513527    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.513540    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.513550    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.537667    7621 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0223 12:57:47.537698    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.537713    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.537724    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.537740    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.537757    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.537779    7621 round_trippers.go:580]     Audit-Id: ef0e5e01-0cae-4caa-8cca-28e0b2d4abd1
	I0223 12:57:47.537799    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.537970    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:47.538355    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:47.538363    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:47.538369    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:47.538374    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:47.540864    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:47.540882    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:47.540890    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:47.540897    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:47.540908    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:47.540913    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:47 GMT
	I0223 12:57:47.540919    7621 round_trippers.go:580]     Audit-Id: 6cf7f39a-ccc9-4376-a891-1f64fabc16ac
	I0223 12:57:47.540926    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:47.541027    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:47.541237    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
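	(Aside on the repeating pattern: each GET of the coredns pod followed by a GET of the node, ending in the pod_ready.go:102 message, is one iteration of a readiness poll; the pod's Ready condition is re-checked roughly every 500 ms until it becomes True or a timeout expires. Below is a minimal sketch of an equivalent poll with client-go. It is an illustration, not minikube's pod_ready.go: the pod name, namespace, and ~500 ms interval are taken from the log, while the 4-minute timeout and kubeconfig handling are assumptions, and the paired node GET is omitted for brevity.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed: credentials come from the default kubeconfig (~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx := context.Background()
		deadline := time.Now().Add(4 * time.Minute) // assumed timeout
		for time.Now().Before(deadline) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-787d4945fb-255qk", metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			// Mirrors the pod_ready.go:102 line in the trace above.
			fmt.Println(`pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"`)
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}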
	I0223 12:57:48.015170    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:48.015192    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.015204    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.015214    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.037601    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:48.037619    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.037627    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.037634    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.037640    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.037647    7621 round_trippers.go:580]     Audit-Id: e024aab2-c046-4ded-a4a4-0c12e1c032c0
	I0223 12:57:48.037657    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.037666    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.037838    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:48.038126    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:48.038134    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.038140    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.038145    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.040555    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:48.040568    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.040575    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.040580    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.040585    7621 round_trippers.go:580]     Audit-Id: 276b01ca-4c9c-4586-bff8-db22e37efb00
	I0223 12:57:48.040590    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.040594    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.040600    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.040688    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:48.513541    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:48.513563    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.513575    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.513586    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.540436    7621 round_trippers.go:574] Response Status: 200 OK in 26 milliseconds
	I0223 12:57:48.540451    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.540457    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.540462    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.540466    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.540470    7621 round_trippers.go:580]     Audit-Id: 9b05377d-0ece-4bb2-85bb-0693dc44b384
	I0223 12:57:48.540475    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.540486    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.540548    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:48.540840    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:48.540846    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:48.540852    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:48.540857    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:48.543399    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:48.543415    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:48.543430    7621 round_trippers.go:580]     Audit-Id: 39da13d8-d350-4009-a0dd-6d82f548e31d
	I0223 12:57:48.543440    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:48.543446    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:48.543451    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:48.543456    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:48.543463    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:48 GMT
	I0223 12:57:48.543592    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:49.014115    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:49.014139    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.014152    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.014162    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.037964    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:49.037981    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.037989    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.037996    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.038002    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.038008    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.038015    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.038021    7621 round_trippers.go:580]     Audit-Id: 31a008c1-b2f4-4487-922f-265333c2e818
	I0223 12:57:49.038097    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:49.038438    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:49.038444    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.038450    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.038456    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.040482    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:49.040491    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.040497    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.040508    7621 round_trippers.go:580]     Audit-Id: 4901e121-c25c-420f-892b-51afebcef866
	I0223 12:57:49.040514    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.040518    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.040523    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.040528    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.040614    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:49.513919    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:49.513940    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.513952    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.513963    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.537507    7621 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0223 12:57:49.537528    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.537548    7621 round_trippers.go:580]     Audit-Id: e452cc89-ed81-48fc-a827-5e8b6f3b6a3f
	I0223 12:57:49.537556    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.537564    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.537574    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.537584    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.537591    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.537700    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:49.538117    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:49.538125    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:49.538134    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:49.538142    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:49.540791    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:49.540803    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:49.540808    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:49.540815    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:49.540823    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:49.540829    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:49.540834    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:49 GMT
	I0223 12:57:49.540838    7621 round_trippers.go:580]     Audit-Id: 0838230f-6504-4de1-817c-0a4f57400a68
	I0223 12:57:49.540931    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:50.015144    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:50.015170    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.015183    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.015192    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.035960    7621 round_trippers.go:574] Response Status: 200 OK in 20 milliseconds
	I0223 12:57:50.035983    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.035995    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.036006    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.036017    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.036037    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.036054    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.036069    7621 round_trippers.go:580]     Audit-Id: e94aef2b-8a67-46cd-8f8a-d7f092b594fa
	I0223 12:57:50.036289    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:50.036582    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:50.036588    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.036596    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.036603    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.038884    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:50.038895    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.038901    7621 round_trippers.go:580]     Audit-Id: 146a91ee-e1fe-48ed-ab36-cec6556ba195
	I0223 12:57:50.038906    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.038930    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.038936    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.038941    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.038945    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.039010    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:50.039216    7621 pod_ready.go:102] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"False"
	I0223 12:57:50.515214    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:50.515234    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.515246    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.515256    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.537319    7621 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I0223 12:57:50.537343    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.537355    7621 round_trippers.go:580]     Audit-Id: f0efffdd-951d-4b86-8f99-639827385670
	I0223 12:57:50.537365    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.537375    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.537390    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.537410    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.537429    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.537552    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:50.537940    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:50.537948    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:50.537955    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:50.537962    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:50.540450    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:50.540465    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:50.540473    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:50.540480    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:50 GMT
	I0223 12:57:50.540488    7621 round_trippers.go:580]     Audit-Id: ad262c15-82c2-4601-9b0b-71967af7b575
	I0223 12:57:50.540497    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:50.540503    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:50.540508    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:50.540568    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:51.013588    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:51.013600    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.013606    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.013611    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.016722    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:51.016738    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.016744    7621 round_trippers.go:580]     Audit-Id: cbc0669b-f725-4abc-8b8a-fdbd07e43e35
	I0223 12:57:51.016749    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.016785    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.016798    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.016803    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.016812    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.016874    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:51.017188    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:51.017196    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.017204    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.017209    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.019344    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:51.019355    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.019363    7621 round_trippers.go:580]     Audit-Id: 14a89371-1a2d-4b0b-8348-2d1e93755a74
	I0223 12:57:51.019368    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.019373    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.019378    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.019383    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.019389    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.019635    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:51.513267    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:51.513287    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.513300    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.513310    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.516778    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:51.516790    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.516795    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.516800    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.516804    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.516810    7621 round_trippers.go:580]     Audit-Id: ac6934bf-d595-4319-8928-0e7a0b139ab5
	I0223 12:57:51.516815    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.516820    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.517403    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"403","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6039 chars]
	I0223 12:57:51.518013    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:51.518023    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:51.518034    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:51.518080    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:51.521081    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:51.521094    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:51.521100    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:51.521106    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:51.521110    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:51.521118    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:51 GMT
	I0223 12:57:51.521123    7621 round_trippers.go:580]     Audit-Id: bfe87b3d-e10e-4716-bb9e-e5ef01179e23
	I0223 12:57:51.521128    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:51.521185    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.013907    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:57:52.013928    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.013942    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.013952    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.017838    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:52.017863    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.017872    7621 round_trippers.go:580]     Audit-Id: 1d326f0a-8fe5-4e3a-a107-df17c6c0bfb6
	I0223 12:57:52.017879    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.017886    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.017892    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.017900    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.017910    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.017997    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 12:57:52.018322    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.018328    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.018333    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.018339    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.020347    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.020356    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.020362    7621 round_trippers.go:580]     Audit-Id: 50a34c55-8ff9-4cc3-8eeb-afd78b48cb80
	I0223 12:57:52.020367    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.020372    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.020377    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.020382    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.020387    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.020453    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.020632    7621 pod_ready.go:92] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.020643    7621 pod_ready.go:81] duration metric: took 15.014158981s waiting for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
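[editor's note] The fifteen-second block above is the pod_ready loop polling the coredns pod roughly every half second until its Ready condition reports True. As a rough illustration only (not the minikube source), the same poll can be written against client-go; the kubeconfig path, the 500 ms interval and the 6-minute timeout below are assumptions read off the cadence visible in this log.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the named pod until its Ready condition is True,
    // mirroring the repeated GETs of coredns-787d4945fb-255qk seen above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil
    	})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-787d4945fb-255qk", 6*time.Minute)
    	fmt.Println("ready:", err == nil)
    }

Each iteration of such a loop corresponds to one GET of the pod (plus, in the log, one GET of the node); simple polling like this counts against the client's own rate limit, which is what produces the throttling messages further down.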
	I0223 12:57:52.020651    7621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-fllr8" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.020682    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-fllr8
	I0223 12:57:52.020687    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.020693    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.020701    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.022630    7621 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0223 12:57:52.022639    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.022645    7621 round_trippers.go:580]     Audit-Id: 31ac6d28-4f4e-4ca8-a999-9455654d0f8e
	I0223 12:57:52.022653    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.022659    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.022676    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.022684    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.022690    7621 round_trippers.go:580]     Content-Length: 216
	I0223 12:57:52.022696    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.022708    7621 request.go:1171] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods \"coredns-787d4945fb-fllr8\" not found","reason":"NotFound","details":{"name":"coredns-787d4945fb-fllr8","kind":"pods"},"code":404}
	I0223 12:57:52.022816    7621 pod_ready.go:97] error getting pod "coredns-787d4945fb-fllr8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-fllr8" not found
	I0223 12:57:52.022823    7621 pod_ready.go:81] duration metric: took 2.166161ms waiting for pod "coredns-787d4945fb-fllr8" in "kube-system" namespace to be "Ready" ...
	E0223 12:57:52.022829    7621 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-787d4945fb-fllr8" in "kube-system" namespace (skipping!): pods "coredns-787d4945fb-fllr8" not found
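[editor's note] The 404 on coredns-787d4945fb-fllr8 is expected here: the second coredns replica was scaled away, so pod_ready logs the NotFound error and skips the pod rather than waiting out the 6-minute timeout. A hedged sketch of that distinction using client-go's error helpers (clientset construction as in the previous sketch; the function name is illustrative):

    import (
    	"context"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podPresent reports whether the pod still exists; a 404 is treated as
    // "skip this pod", while any other error is propagated as a real failure.
    func podPresent(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return false, nil // e.g. coredns-787d4945fb-fllr8 above
    	}
    	if err != nil {
    		return false, err
    	}
    	return true, nil
    }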
	I0223 12:57:52.022837    7621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.022861    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/etcd-multinode-899000
	I0223 12:57:52.022868    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.022873    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.022879    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.024862    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.024870    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.024876    7621 round_trippers.go:580]     Audit-Id: 79afbc66-7802-45ee-8b5d-182ed3438ac9
	I0223 12:57:52.024881    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.024886    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.024891    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.024896    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.024901    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.024946    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899000","namespace":"kube-system","uid":"04c36b20-3f1c-4967-be88-dfaf04e459fb","resourceVersion":"273","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.mirror":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.seen":"2023-02-23T20:57:22.892805434Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 12:57:52.025159    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.025165    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.025171    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.025177    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.027383    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.027395    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.027400    7621 round_trippers.go:580]     Audit-Id: e5256895-7ce4-4a93-985a-983e6a92f71b
	I0223 12:57:52.027406    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.027411    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.027416    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.027420    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.027425    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.027505    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.027688    7621 pod_ready.go:92] pod "etcd-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.027695    7621 pod_ready.go:81] duration metric: took 4.853774ms waiting for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.027702    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.027730    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899000
	I0223 12:57:52.027734    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.027739    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.027746    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.029635    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.029644    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.029649    7621 round_trippers.go:580]     Audit-Id: ec57d51b-2886-46e6-866c-2d3df1e4fe35
	I0223 12:57:52.029658    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.029664    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.029670    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.029674    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.029680    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.029742    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899000","namespace":"kube-system","uid":"8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b","resourceVersion":"275","creationTimestamp":"2023-02-23T20:57:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.mirror":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.seen":"2023-02-23T20:57:13.277278836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 12:57:52.030018    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.030024    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.030030    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.030035    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.032046    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.032056    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.032065    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.032070    7621 round_trippers.go:580]     Audit-Id: c324aed6-1792-4b08-ad2b-d70633205de5
	I0223 12:57:52.032075    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.032080    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.032087    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.032092    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.032136    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.032302    7621 pod_ready.go:92] pod "kube-apiserver-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.032307    7621 pod_ready.go:81] duration metric: took 4.599631ms waiting for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.032313    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.032339    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899000
	I0223 12:57:52.032343    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.032350    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.032358    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.034377    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.034388    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.034396    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.034402    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.034407    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.034413    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.034419    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.034424    7621 round_trippers.go:580]     Audit-Id: 980ae714-349b-400d-b826-3c0178a86978
	I0223 12:57:52.034493    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899000","namespace":"kube-system","uid":"8a9821eb-106e-43fb-919d-59f0d6132887","resourceVersion":"301","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.mirror":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.seen":"2023-02-23T20:57:22.892794347Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 12:57:52.034741    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.034747    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.034753    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.034758    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.036930    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.036938    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.036944    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.036948    7621 round_trippers.go:580]     Audit-Id: 5d07398b-1852-40c0-a5b8-d2040ed95ffa
	I0223 12:57:52.036954    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.036958    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.036964    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.036969    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.037035    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.037206    7621 pod_ready.go:92] pod "kube-controller-manager-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.037212    7621 pod_ready.go:81] duration metric: took 4.8941ms waiting for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.037219    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.037248    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-w885m
	I0223 12:57:52.037252    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.037258    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.037264    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.039374    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.039383    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.039389    7621 round_trippers.go:580]     Audit-Id: 9147538d-969a-4301-ad43-999a043f8b58
	I0223 12:57:52.039394    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.039400    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.039408    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.039414    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.039419    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.039475    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w885m","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e1284e2-dcb3-408c-bc90-a501107f7e23","resourceVersion":"397","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 12:57:52.214086    7621 request.go:622] Waited for 174.278826ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.214143    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.214153    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.214171    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.214182    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.217595    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:52.217611    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.217616    7621 round_trippers.go:580]     Audit-Id: 1196ae99-b719-4d9b-b625-d61fdd5b8668
	I0223 12:57:52.217622    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.217627    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.217632    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.217637    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.217645    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.217712    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.217943    7621 pod_ready.go:92] pod "kube-proxy-w885m" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.217957    7621 pod_ready.go:81] duration metric: took 180.729704ms waiting for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
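[editor's note] The "Waited for ... due to client-side throttling, not priority and fairness" lines around this point come from client-go's default rate limiter (5 requests/s, burst 10) pacing the burst of per-pod and per-node GETs; the delay is imposed by the client, not by API-server pushback. If that pacing mattered, the limits could be raised on the config before building the clientset; the values below are arbitrary examples, not what minikube uses:

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newFasterClient builds a clientset with a higher client-side rate limit so
    // short bursts of GETs are not delayed by the default 5 QPS / 10 burst limiter.
    func newFasterClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // client-go default is 5
    	cfg.Burst = 100 // client-go default is 10
    	return kubernetes.NewForConfig(cfg)
    }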
	I0223 12:57:52.217963    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.413876    7621 request.go:622] Waited for 195.871547ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899000
	I0223 12:57:52.413947    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899000
	I0223 12:57:52.413952    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.413959    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.413965    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.416833    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:57:52.416843    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.416849    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.416854    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.416859    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.416864    7621 round_trippers.go:580]     Audit-Id: 272fc67b-d140-4985-b548-c85b1ce81f03
	I0223 12:57:52.416870    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.416874    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.416948    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899000","namespace":"kube-system","uid":"b864a38e-68d2-4949-92a9-0f736cbdf7fe","resourceVersion":"296","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bad6109cbec6cd514239122749558677","kubernetes.io/config.mirror":"bad6109cbec6cd514239122749558677","kubernetes.io/config.seen":"2023-02-23T20:57:22.892804438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 12:57:52.613918    7621 request.go:622] Waited for 196.719938ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.613981    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:57:52.613993    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.614005    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.614016    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.617690    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:52.617702    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.617708    7621 round_trippers.go:580]     Audit-Id: b3a6a5d0-4a80-46b3-a54f-53e427bd43b5
	I0223 12:57:52.617713    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.617718    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.617723    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.617728    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.617733    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.617796    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 4954 chars]
	I0223 12:57:52.617982    7621 pod_ready.go:92] pod "kube-scheduler-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:57:52.617988    7621 pod_ready.go:81] duration metric: took 400.011941ms waiting for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:57:52.617994    7621 pod_ready.go:38] duration metric: took 15.619951169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:57:52.618009    7621 api_server.go:51] waiting for apiserver process to appear ...
	I0223 12:57:52.618069    7621 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 12:57:52.627311    7621 command_runner.go:130] > 1885
	I0223 12:57:52.627987    7621 api_server.go:71] duration metric: took 16.063512433s to wait for apiserver process to appear ...
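[editor's note] Before probing the API endpoint, the test confirms the kube-apiserver process itself is running by executing sudo pgrep -xnf kube-apiserver.*minikube.* inside the node via the SSH runner; the single line of output (1885) is the PID. A local stand-in for that check, using os/exec instead of the harness's ssh_runner, might look like this (illustrative only):

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // apiserverPID runs the same pgrep pattern as the log above and returns the
    // newest matching PID, or an error if no kube-apiserver process is running.
    func apiserverPID() (string, error) {
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return "", fmt.Errorf("kube-apiserver process not found: %w", err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }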
	I0223 12:57:52.628000    7621 api_server.go:87] waiting for apiserver healthz status ...
	I0223 12:57:52.628011    7621 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51104/healthz ...
	I0223 12:57:52.634019    7621 api_server.go:278] https://127.0.0.1:51104/healthz returned 200:
	ok
	I0223 12:57:52.634053    7621 round_trippers.go:463] GET https://127.0.0.1:51104/version
	I0223 12:57:52.634057    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.634064    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.634070    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.635235    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:57:52.635247    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.635253    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.635259    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.635264    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.635270    7621 round_trippers.go:580]     Content-Length: 263
	I0223 12:57:52.635274    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.635280    7621 round_trippers.go:580]     Audit-Id: 7ccf1e6f-08a5-4c76-9dab-92bdd8b4242d
	I0223 12:57:52.635285    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.635294    7621 request.go:1171] Response Body: {
	  "major": "1",
	  "minor": "26",
	  "gitVersion": "v1.26.1",
	  "gitCommit": "8f94681cd294aa8cfd3407b8191f6c70214973a4",
	  "gitTreeState": "clean",
	  "buildDate": "2023-01-18T15:51:25Z",
	  "goVersion": "go1.19.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0223 12:57:52.635337    7621 api_server.go:140] control plane version: v1.26.1
	I0223 12:57:52.635344    7621 api_server.go:130] duration metric: took 7.339719ms to wait for apiserver health ...
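For reference, a minimal Go sketch of the probe recorded above (GET /healthz, then GET /version, against the forwarded apiserver port). It assumes the port 51104 from this particular run and skips TLS verification purely for illustration; it is not minikube's implementation.

	// Probe an apiserver the way the wait above does: /healthz should return
	// "ok" with status 200, /version returns the JSON version payload.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{
			// Verification skipped only for this sketch; the real check trusts the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for _, path := range []string{"/healthz", "/version"} {
			resp, err := client.Get("https://127.0.0.1:51104" + path)
			if err != nil {
				fmt.Println(path, "error:", err)
				continue
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("%s -> %d: %s\n", path, resp.StatusCode, body)
		}
	}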
	I0223 12:57:52.635348    7621 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 12:57:52.815960    7621 request.go:622] Waited for 180.563441ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:52.816069    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:52.816081    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:52.816094    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:52.816106    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:52.821756    7621 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0223 12:57:52.821773    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:52.821783    7621 round_trippers.go:580]     Audit-Id: e8f493f6-e476-42c9-a1c5-2a0d9b2068d1
	I0223 12:57:52.821790    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:52.821800    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:52.821805    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:52.821810    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:52.821815    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:52 GMT
	I0223 12:57:52.822596    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 12:57:52.823871    7621 system_pods.go:59] 8 kube-system pods found
	I0223 12:57:52.823884    7621 system_pods.go:61] "coredns-787d4945fb-255qk" [b14a01e5-36d7-4404-9478-12ce93233303] Running
	I0223 12:57:52.823890    7621 system_pods.go:61] "etcd-multinode-899000" [04c36b20-3f1c-4967-be88-dfaf04e459fb] Running
	I0223 12:57:52.823894    7621 system_pods.go:61] "kindnet-gvns6" [4583b1ff-e149-4409-a263-2b75532c1b48] Running
	I0223 12:57:52.823898    7621 system_pods.go:61] "kube-apiserver-multinode-899000" [8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b] Running
	I0223 12:57:52.823902    7621 system_pods.go:61] "kube-controller-manager-multinode-899000" [8a9821eb-106e-43fb-919d-59f0d6132887] Running
	I0223 12:57:52.823906    7621 system_pods.go:61] "kube-proxy-w885m" [9e1284e2-dcb3-408c-bc90-a501107f7e23] Running
	I0223 12:57:52.823910    7621 system_pods.go:61] "kube-scheduler-multinode-899000" [b864a38e-68d2-4949-92a9-0f736cbdf7fe] Running
	I0223 12:57:52.823914    7621 system_pods.go:61] "storage-provisioner" [1cdb29ef-26cb-4ab3-a7f9-c455dfda76d9] Running
	I0223 12:57:52.823918    7621 system_pods.go:74] duration metric: took 188.562695ms to wait for pod list to return data ...
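The same pod check can be reproduced with client-go; a minimal sketch (not the code that produced this log), assuming a kubeconfig for this cluster at the default ~/.kube/config path:

	// List kube-system pods and report which are Running, mirroring the
	// system_pods wait above. The kubeconfig path is an assumption.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))
		for _, p := range pods.Items {
			// The wait above requires every listed pod to report phase Running.
			running := p.Status.Phase == corev1.PodRunning
			fmt.Printf("%q [%s] Running=%v\n", p.Name, p.UID, running)
		}
	}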
	I0223 12:57:52.823925    7621 default_sa.go:34] waiting for default service account to be created ...
	I0223 12:57:53.015033    7621 request.go:622] Waited for 191.045495ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/default/serviceaccounts
	I0223 12:57:53.015118    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/default/serviceaccounts
	I0223 12:57:53.015126    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:53.015138    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:53.015149    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:53.019608    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:57:53.019621    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:53.019626    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:53.019631    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:53.019640    7621 round_trippers.go:580]     Content-Length: 261
	I0223 12:57:53.019644    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:53 GMT
	I0223 12:57:53.019650    7621 round_trippers.go:580]     Audit-Id: 112cfe23-2662-4a95-8c4b-64ece10582f0
	I0223 12:57:53.019657    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:53.019663    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:53.019676    7621 request.go:1171] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"0e104d57-3e04-4d24-8671-d465a61acfa7","resourceVersion":"312","creationTimestamp":"2023-02-23T20:57:35Z"}}]}
	I0223 12:57:53.019781    7621 default_sa.go:45] found service account: "default"
	I0223 12:57:53.019788    7621 default_sa.go:55] duration metric: took 195.854678ms for default service account to be created ...
	I0223 12:57:53.019793    7621 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 12:57:53.214058    7621 request.go:622] Waited for 194.226374ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:53.214093    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:57:53.214099    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:53.214111    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:53.214154    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:53.217977    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:53.217988    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:53.217994    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:53.217999    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:53.218008    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:53 GMT
	I0223 12:57:53.218014    7621 round_trippers.go:580]     Audit-Id: 1970a5d2-3fca-4726-9bc2-c0a7594f4d4e
	I0223 12:57:53.218019    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:53.218024    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:53.218698    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 55202 chars]
	I0223 12:57:53.219999    7621 system_pods.go:86] 8 kube-system pods found
	I0223 12:57:53.220008    7621 system_pods.go:89] "coredns-787d4945fb-255qk" [b14a01e5-36d7-4404-9478-12ce93233303] Running
	I0223 12:57:53.220012    7621 system_pods.go:89] "etcd-multinode-899000" [04c36b20-3f1c-4967-be88-dfaf04e459fb] Running
	I0223 12:57:53.220016    7621 system_pods.go:89] "kindnet-gvns6" [4583b1ff-e149-4409-a263-2b75532c1b48] Running
	I0223 12:57:53.220020    7621 system_pods.go:89] "kube-apiserver-multinode-899000" [8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b] Running
	I0223 12:57:53.220025    7621 system_pods.go:89] "kube-controller-manager-multinode-899000" [8a9821eb-106e-43fb-919d-59f0d6132887] Running
	I0223 12:57:53.220029    7621 system_pods.go:89] "kube-proxy-w885m" [9e1284e2-dcb3-408c-bc90-a501107f7e23] Running
	I0223 12:57:53.220032    7621 system_pods.go:89] "kube-scheduler-multinode-899000" [b864a38e-68d2-4949-92a9-0f736cbdf7fe] Running
	I0223 12:57:53.220038    7621 system_pods.go:89] "storage-provisioner" [1cdb29ef-26cb-4ab3-a7f9-c455dfda76d9] Running
	I0223 12:57:53.220044    7621 system_pods.go:126] duration metric: took 200.242956ms to wait for k8s-apps to be running ...
	I0223 12:57:53.220051    7621 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 12:57:53.220107    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:57:53.229785    7621 system_svc.go:56] duration metric: took 9.728758ms WaitForService to wait for kubelet.
	I0223 12:57:53.229797    7621 kubeadm.go:578] duration metric: took 16.665310693s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 12:57:53.229810    7621 node_conditions.go:102] verifying NodePressure condition ...
	I0223 12:57:53.414181    7621 request.go:622] Waited for 184.278772ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes
	I0223 12:57:53.414239    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes
	I0223 12:57:53.414247    7621 round_trippers.go:469] Request Headers:
	I0223 12:57:53.414260    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:57:53.414271    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:57:53.418096    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:57:53.418113    7621 round_trippers.go:577] Response Headers:
	I0223 12:57:53.418121    7621 round_trippers.go:580]     Audit-Id: 0855f447-43ce-46da-ad93-3fbe83589606
	I0223 12:57:53.418128    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:57:53.418134    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:57:53.418141    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:57:53.418149    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:57:53.418155    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:57:53 GMT
	I0223 12:57:53.418255    7621 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"437"},"items":[{"metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"414","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5007 chars]
	I0223 12:57:53.418506    7621 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0223 12:57:53.418519    7621 node_conditions.go:123] node cpu capacity is 6
	I0223 12:57:53.418528    7621 node_conditions.go:105] duration metric: took 188.711451ms to run NodePressure ...
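The NodePressure step above reads the same node list; a sketch of pulling the two capacity figures it logs (ephemeral storage and CPU), reusing the clientset and imports from the previous sketch:

	// Report node capacity the way the NodePressure check logs it.
	// Assumes the clientset built in the previous sketch.
	func printNodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
		nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, n := range nodes.Items {
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n", n.Name, storage.String(), cpu.String())
		}
		return nil
	}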
	I0223 12:57:53.418535    7621 start.go:228] waiting for startup goroutines ...
	I0223 12:57:53.418541    7621 start.go:233] waiting for cluster config update ...
	I0223 12:57:53.418551    7621 start.go:242] writing updated cluster config ...
	I0223 12:57:53.440182    7621 out.go:177] 
	I0223 12:57:53.462685    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:57:53.462788    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:57:53.485351    7621 out.go:177] * Starting worker node multinode-899000-m02 in cluster multinode-899000
	I0223 12:57:53.506982    7621 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:57:53.528371    7621 out.go:177] * Pulling base image ...
	I0223 12:57:53.571022    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:57:53.571055    7621 cache.go:57] Caching tarball of preloaded images
	I0223 12:57:53.571092    7621 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:57:53.571236    7621 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 12:57:53.571256    7621 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 12:57:53.571369    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:57:53.630104    7621 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 12:57:53.630125    7621 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 12:57:53.630147    7621 cache.go:193] Successfully downloaded all kic artifacts
	I0223 12:57:53.630187    7621 start.go:364] acquiring machines lock for multinode-899000-m02: {Name:mk5c03a1afa4b7b0e0a809f52d581925fe861d81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 12:57:53.630481    7621 start.go:368] acquired machines lock for "multinode-899000-m02" in 282.935µs
	I0223 12:57:53.630511    7621 start.go:93] Provisioning new machine with config: &{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 12:57:53.630574    7621 start.go:125] createHost starting for "m02" (driver="docker")
	I0223 12:57:53.652550    7621 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 12:57:53.652798    7621 start.go:159] libmachine.API.Create for "multinode-899000" (driver="docker")
	I0223 12:57:53.652835    7621 client.go:168] LocalClient.Create starting
	I0223 12:57:53.653017    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 12:57:53.653098    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:57:53.653125    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:57:53.653228    7621 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 12:57:53.653280    7621 main.go:141] libmachine: Decoding PEM data...
	I0223 12:57:53.653300    7621 main.go:141] libmachine: Parsing certificate...
	I0223 12:57:53.674643    7621 cli_runner.go:164] Run: docker network inspect multinode-899000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 12:57:53.731836    7621 network_create.go:76] Found existing network {name:multinode-899000 subnet:0xc0005581e0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 58 1] mtu:1500}
	I0223 12:57:53.731878    7621 kic.go:117] calculated static IP "192.168.58.3" for the "multinode-899000-m02" container
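A sketch of the static-IP calculation recorded above, deriving the worker's address from the existing network's gateway; the +index offset scheme is an assumption for illustration, not minikube's kic code:

	// Derive a per-node address on the existing docker network: with gateway
	// 192.168.58.1, node 1 gets .2 and node 2 (m02) gets .3.
	package main

	import (
		"fmt"
		"net"
	)

	func staticIP(gateway string, nodeIndex int) string {
		ip := net.ParseIP(gateway).To4()
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += byte(nodeIndex) // assumed offset scheme, for illustration only
		return out.String()
	}

	func main() {
		fmt.Println(staticIP("192.168.58.1", 2)) // 192.168.58.3
	}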
	I0223 12:57:53.732001    7621 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 12:57:53.787889    7621 cli_runner.go:164] Run: docker volume create multinode-899000-m02 --label name.minikube.sigs.k8s.io=multinode-899000-m02 --label created_by.minikube.sigs.k8s.io=true
	I0223 12:57:53.843787    7621 oci.go:103] Successfully created a docker volume multinode-899000-m02
	I0223 12:57:53.843920    7621 cli_runner.go:164] Run: docker run --rm --name multinode-899000-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000-m02 --entrypoint /usr/bin/test -v multinode-899000-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	I0223 12:57:54.283218    7621 oci.go:107] Successfully prepared a docker volume multinode-899000-m02
	I0223 12:57:54.283253    7621 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:57:54.283266    7621 kic.go:190] Starting extracting preloaded images to volume ...
	I0223 12:57:54.283379    7621 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir
	I0223 12:58:00.609103    7621 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-899000-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -I lz4 -xf /preloaded.tar -C /extractDir: (6.325541285s)
	I0223 12:58:00.609123    7621 kic.go:199] duration metric: took 6.325740 seconds to extract preloaded images to volume
	I0223 12:58:00.609225    7621 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0223 12:58:00.749644    7621 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-899000-m02 --name multinode-899000-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-899000-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-899000-m02 --network multinode-899000 --ip 192.168.58.3 --volume multinode-899000-m02:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc
	I0223 12:58:01.114569    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Running}}
	I0223 12:58:01.179142    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:58:01.242270    7621 cli_runner.go:164] Run: docker exec multinode-899000-m02 stat /var/lib/dpkg/alternatives/iptables
	I0223 12:58:01.358626    7621 oci.go:144] the created container "multinode-899000-m02" has a running status.
	I0223 12:58:01.358651    7621 kic.go:221] Creating ssh key for kic: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa...
	I0223 12:58:01.597296    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0223 12:58:01.597354    7621 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0223 12:58:01.698107    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:58:01.755893    7621 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0223 12:58:01.755914    7621 kic_runner.go:114] Args: [docker exec --privileged multinode-899000-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
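The key material copied above can be produced with standard Go crypto packages; a minimal sketch (not the kic implementation) of generating an id_rsa / authorized_keys pair of that shape:

	// Generate an RSA key pair and the single authorized_keys line that gets
	// pushed to /home/docker/.ssh/authorized_keys on the node.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"fmt"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		priv, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// PEM private key, the shape written to a machines/<node>/id_rsa file.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(priv),
		})
		pub, err := ssh.NewPublicKey(&priv.PublicKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("private key: %d bytes\n", len(privPEM))
		fmt.Printf("authorized_keys line: %s", ssh.MarshalAuthorizedKey(pub))
	}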
	I0223 12:58:01.855680    7621 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:58:01.912416    7621 machine.go:88] provisioning docker machine ...
	I0223 12:58:01.912458    7621 ubuntu.go:169] provisioning hostname "multinode-899000-m02"
	I0223 12:58:01.912554    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:01.970487    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:01.970880    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:01.970890    7621 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-899000-m02 && echo "multinode-899000-m02" | sudo tee /etc/hostname
	I0223 12:58:02.113684    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-899000-m02
	
	I0223 12:58:02.113789    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.170964    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:02.171323    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:02.171336    7621 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-899000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-899000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-899000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 12:58:02.304072    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 12:58:02.304095    7621 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15909-825/.minikube CaCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15909-825/.minikube}
	I0223 12:58:02.304104    7621 ubuntu.go:177] setting up certificates
	I0223 12:58:02.304110    7621 provision.go:83] configureAuth start
	I0223 12:58:02.304186    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:58:02.362032    7621 provision.go:138] copyHostCerts
	I0223 12:58:02.362087    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:58:02.362146    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem, removing ...
	I0223 12:58:02.362152    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem
	I0223 12:58:02.362253    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/ca.pem (1078 bytes)
	I0223 12:58:02.362425    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:58:02.362461    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem, removing ...
	I0223 12:58:02.362466    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem
	I0223 12:58:02.362524    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/cert.pem (1123 bytes)
	I0223 12:58:02.362643    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:58:02.362672    7621 exec_runner.go:144] found /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem, removing ...
	I0223 12:58:02.362677    7621 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem
	I0223 12:58:02.362748    7621 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15909-825/.minikube/key.pem (1675 bytes)
	I0223 12:58:02.362869    7621 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem org=jenkins.multinode-899000-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-899000-m02]
	I0223 12:58:02.430743    7621 provision.go:172] copyRemoteCerts
	I0223 12:58:02.430801    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 12:58:02.430865    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.488615    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:02.583507    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0223 12:58:02.583585    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0223 12:58:02.601383    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0223 12:58:02.601471    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0223 12:58:02.618489    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0223 12:58:02.618586    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 12:58:02.635752    7621 provision.go:86] duration metric: configureAuth took 331.62292ms
	I0223 12:58:02.635768    7621 ubuntu.go:193] setting minikube options for container-runtime
	I0223 12:58:02.635922    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:58:02.635997    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.693216    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:02.693572    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:02.693582    7621 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0223 12:58:02.827446    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0223 12:58:02.827458    7621 ubuntu.go:71] root file system type: overlay
	I0223 12:58:02.827557    7621 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I0223 12:58:02.827635    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:02.885944    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:02.886302    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:02.886359    7621 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0223 12:58:03.029517    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0223 12:58:03.029624    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:03.088025    7621 main.go:141] libmachine: Using SSH client type: native
	I0223 12:58:03.088389    7621 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x140f720] 0x1412660 <nil>  [] 0s} 127.0.0.1 51172 <nil> <nil>}
	I0223 12:58:03.088403    7621 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0223 12:58:03.731596    7621 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2023-02-09 19:46:56.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 20:58:03.027503949 +0000
	@@ -1,30 +1,33 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Environment=NO_PROXY=192.168.58.2
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +35,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0223 12:58:03.731621    7621 machine.go:91] provisioned docker machine in 1.819152297s
	I0223 12:58:03.731628    7621 client.go:171] LocalClient.Create took 10.078604193s
	I0223 12:58:03.731643    7621 start.go:167] duration metric: libmachine.API.Create for "multinode-899000" took 10.078665842s
	I0223 12:58:03.731649    7621 start.go:300] post-start starting for "multinode-899000-m02" (driver="docker")
	I0223 12:58:03.731653    7621 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 12:58:03.731739    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 12:58:03.731794    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:03.789461    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:03.884866    7621 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 12:58:03.888318    7621 command_runner.go:130] > NAME="Ubuntu"
	I0223 12:58:03.888327    7621 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I0223 12:58:03.888331    7621 command_runner.go:130] > ID=ubuntu
	I0223 12:58:03.888352    7621 command_runner.go:130] > ID_LIKE=debian
	I0223 12:58:03.888357    7621 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I0223 12:58:03.888360    7621 command_runner.go:130] > VERSION_ID="20.04"
	I0223 12:58:03.888365    7621 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0223 12:58:03.888369    7621 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0223 12:58:03.888374    7621 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0223 12:58:03.888388    7621 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0223 12:58:03.888394    7621 command_runner.go:130] > VERSION_CODENAME=focal
	I0223 12:58:03.888399    7621 command_runner.go:130] > UBUNTU_CODENAME=focal
	I0223 12:58:03.888452    7621 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0223 12:58:03.888463    7621 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0223 12:58:03.888486    7621 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0223 12:58:03.888493    7621 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I0223 12:58:03.888499    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/addons for local assets ...
	I0223 12:58:03.888588    7621 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15909-825/.minikube/files for local assets ...
	I0223 12:58:03.888742    7621 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> 20572.pem in /etc/ssl/certs
	I0223 12:58:03.888750    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /etc/ssl/certs/20572.pem
	I0223 12:58:03.888923    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 12:58:03.896128    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:58:03.912955    7621 start.go:303] post-start completed in 181.294649ms
	I0223 12:58:03.913482    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:58:03.969733    7621 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/config.json ...
	I0223 12:58:03.970137    7621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:58:03.970191    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:04.028655    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:04.119853    7621 command_runner.go:130] > 6%!(MISSING)
	I0223 12:58:04.119927    7621 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:58:04.124350    7621 command_runner.go:130] > 99G
	I0223 12:58:04.124620    7621 start.go:128] duration metric: createHost completed in 10.49384776s
	I0223 12:58:04.124634    7621 start.go:83] releasing machines lock for "multinode-899000-m02", held for 10.49395392s
	I0223 12:58:04.124724    7621 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:58:04.204672    7621 out.go:177] * Found network options:
	I0223 12:58:04.226041    7621 out.go:177]   - NO_PROXY=192.168.58.2
	W0223 12:58:04.247484    7621 proxy.go:119] fail to check proxy env: Error ip not in block
	W0223 12:58:04.247521    7621 proxy.go:119] fail to check proxy env: Error ip not in block
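The two warnings above report that an address was not found inside the NO_PROXY block (192.168.58.2 here); a sketch of that kind of IP-in-block test (not minikube's proxy.go), treating a bare address as a host-sized block:

	// Test whether an address falls inside a NO_PROXY entry. A bare IP such as
	// 192.168.58.2 covers only itself, so other addresses come back "not in
	// block"; the warnings above are of that kind and are harmless here.
	package main

	import (
		"fmt"
		"net"
		"strings"
	)

	func ipInBlock(ipStr, block string) (bool, error) {
		ip := net.ParseIP(ipStr)
		if ip == nil {
			return false, fmt.Errorf("bad ip %q", ipStr)
		}
		if !strings.Contains(block, "/") {
			block += "/32" // single address: treat as a host-sized block
		}
		_, cidr, err := net.ParseCIDR(block)
		if err != nil {
			return false, err
		}
		return cidr.Contains(ip), nil
	}

	func main() {
		ok, _ := ipInBlock("192.168.58.3", "192.168.58.2")
		fmt.Println(ok) // false
	}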
	I0223 12:58:04.247640    7621 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0223 12:58:04.247647    7621 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0223 12:58:04.247712    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:04.247723    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:58:04.308866    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:04.308860    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:58:04.455176    7621 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0223 12:58:04.455202    7621 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0223 12:58:04.455208    7621 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0223 12:58:04.455213    7621 command_runner.go:130] > Device: 10001ch/1048604d	Inode: 2229761     Links: 1
	I0223 12:58:04.455219    7621 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:58:04.455225    7621 command_runner.go:130] > Access: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:58:04.455231    7621 command_runner.go:130] > Modify: 2023-01-10 16:48:19.000000000 +0000
	I0223 12:58:04.455236    7621 command_runner.go:130] > Change: 2023-02-23 20:33:52.692471760 +0000
	I0223 12:58:04.455239    7621 command_runner.go:130] >  Birth: -
	I0223 12:58:04.455315    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0223 12:58:04.476107    7621 cni.go:229] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0223 12:58:04.476177    7621 ssh_runner.go:195] Run: which cri-dockerd
	I0223 12:58:04.480145    7621 command_runner.go:130] > /usr/bin/cri-dockerd
	I0223 12:58:04.480231    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0223 12:58:04.487548    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (135 bytes)
	I0223 12:58:04.500409    7621 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 12:58:04.514526    7621 command_runner.go:139] > /etc/cni/net.d/100-crio-bridge.conf, 
	I0223 12:58:04.514563    7621 cni.go:261] disabled [/etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0223 12:58:04.514571    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:58:04.514581    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:58:04.514653    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:58:04.527017    7621 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:58:04.527031    7621 command_runner.go:130] > image-endpoint: unix:///run/containerd/containerd.sock
	I0223 12:58:04.527853    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0223 12:58:04.536504    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 12:58:04.544895    7621 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 12:58:04.544956    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 12:58:04.553381    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:58:04.561798    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 12:58:04.570360    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 12:58:04.578952    7621 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 12:58:04.586968    7621 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0223 12:58:04.595681    7621 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 12:58:04.602353    7621 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0223 12:58:04.603050    7621 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0223 12:58:04.610377    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:58:04.686472    7621 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 12:58:04.760664    7621 start.go:485] detecting cgroup driver to use...
	I0223 12:58:04.760684    7621 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0223 12:58:04.760748    7621 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0223 12:58:04.770459    7621 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0223 12:58:04.770528    7621 command_runner.go:130] > [Unit]
	I0223 12:58:04.770543    7621 command_runner.go:130] > Description=Docker Application Container Engine
	I0223 12:58:04.770565    7621 command_runner.go:130] > Documentation=https://docs.docker.com
	I0223 12:58:04.770578    7621 command_runner.go:130] > BindsTo=containerd.service
	I0223 12:58:04.770586    7621 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0223 12:58:04.770594    7621 command_runner.go:130] > Wants=network-online.target
	I0223 12:58:04.770604    7621 command_runner.go:130] > Requires=docker.socket
	I0223 12:58:04.770611    7621 command_runner.go:130] > StartLimitBurst=3
	I0223 12:58:04.770616    7621 command_runner.go:130] > StartLimitIntervalSec=60
	I0223 12:58:04.770621    7621 command_runner.go:130] > [Service]
	I0223 12:58:04.770626    7621 command_runner.go:130] > Type=notify
	I0223 12:58:04.770629    7621 command_runner.go:130] > Restart=on-failure
	I0223 12:58:04.770633    7621 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I0223 12:58:04.770641    7621 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0223 12:58:04.770652    7621 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0223 12:58:04.770657    7621 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0223 12:58:04.770663    7621 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0223 12:58:04.770669    7621 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0223 12:58:04.770676    7621 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0223 12:58:04.770683    7621 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0223 12:58:04.770693    7621 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0223 12:58:04.770701    7621 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0223 12:58:04.770704    7621 command_runner.go:130] > ExecStart=
	I0223 12:58:04.770718    7621 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0223 12:58:04.770723    7621 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0223 12:58:04.770728    7621 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0223 12:58:04.770735    7621 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0223 12:58:04.770744    7621 command_runner.go:130] > LimitNOFILE=infinity
	I0223 12:58:04.770749    7621 command_runner.go:130] > LimitNPROC=infinity
	I0223 12:58:04.770754    7621 command_runner.go:130] > LimitCORE=infinity
	I0223 12:58:04.770758    7621 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0223 12:58:04.770762    7621 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0223 12:58:04.770766    7621 command_runner.go:130] > TasksMax=infinity
	I0223 12:58:04.770769    7621 command_runner.go:130] > TimeoutStartSec=0
	I0223 12:58:04.770774    7621 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0223 12:58:04.770778    7621 command_runner.go:130] > Delegate=yes
	I0223 12:58:04.770788    7621 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0223 12:58:04.770792    7621 command_runner.go:130] > KillMode=process
	I0223 12:58:04.770795    7621 command_runner.go:130] > [Install]
	I0223 12:58:04.770799    7621 command_runner.go:130] > WantedBy=multi-user.target
	I0223 12:58:04.771410    7621 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I0223 12:58:04.771475    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 12:58:04.781576    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 12:58:04.794326    7621 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:58:04.794339    7621 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I0223 12:58:04.795123    7621 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0223 12:58:04.875293    7621 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0223 12:58:04.970597    7621 docker.go:529] configuring docker to use "cgroupfs" as cgroup driver...
	I0223 12:58:04.970615    7621 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (144 bytes)
	I0223 12:58:04.983789    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:58:05.072300    7621 ssh_runner.go:195] Run: sudo systemctl restart docker
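The "configuring docker to use cgroupfs" step above pushes a small /etc/docker/daemon.json to the node before the daemon-reload and restart, but the log never prints the file's contents. As a rough sketch only (the real 144-byte file minikube writes may carry additional keys), the standard way to pin dockerd to the cgroupfs driver is the exec-opts setting rendered by this Go snippet:

package main

// Sketch only: render a minimal /etc/docker/daemon.json that selects the
// "cgroupfs" cgroup driver via exec-opts. The file actually scp'd to the
// node is not shown in the log and may contain more settings.

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}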
	I0223 12:58:05.292127    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:58:05.367710    7621 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0223 12:58:05.367778    7621 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0223 12:58:05.433730    7621 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0223 12:58:05.505014    7621 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 12:58:05.580488    7621 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0223 12:58:05.609862    7621 start.go:532] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0223 12:58:05.609951    7621 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0223 12:58:05.614176    7621 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0223 12:58:05.614187    7621 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0223 12:58:05.614195    7621 command_runner.go:130] > Device: 100024h/1048612d	Inode: 206         Links: 1
	I0223 12:58:05.614202    7621 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0223 12:58:05.614211    7621 command_runner.go:130] > Access: 2023-02-23 20:58:05.588503925 +0000
	I0223 12:58:05.614217    7621 command_runner.go:130] > Modify: 2023-02-23 20:58:05.588503925 +0000
	I0223 12:58:05.614221    7621 command_runner.go:130] > Change: 2023-02-23 20:58:05.606503924 +0000
	I0223 12:58:05.614226    7621 command_runner.go:130] >  Birth: -
	I0223 12:58:05.614246    7621 start.go:553] Will wait 60s for crictl version
	I0223 12:58:05.614285    7621 ssh_runner.go:195] Run: which crictl
	I0223 12:58:05.618012    7621 command_runner.go:130] > /usr/bin/crictl
	I0223 12:58:05.618085    7621 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 12:58:05.713742    7621 command_runner.go:130] > Version:  0.1.0
	I0223 12:58:05.713755    7621 command_runner.go:130] > RuntimeName:  docker
	I0223 12:58:05.713759    7621 command_runner.go:130] > RuntimeVersion:  23.0.1
	I0223 12:58:05.713766    7621 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0223 12:58:05.715686    7621 start.go:569] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  23.0.1
	RuntimeApiVersion:  v1alpha2
	I0223 12:58:05.715762    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:58:05.738599    7621 command_runner.go:130] > 23.0.1
	I0223 12:58:05.740332    7621 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0223 12:58:05.763944    7621 command_runner.go:130] > 23.0.1
	I0223 12:58:05.809278    7621 out.go:204] * Preparing Kubernetes v1.26.1 on Docker 23.0.1 ...
	I0223 12:58:05.831233    7621 out.go:177]   - env NO_PROXY=192.168.58.2
	I0223 12:58:05.853423    7621 cli_runner.go:164] Run: docker exec -t multinode-899000-m02 dig +short host.docker.internal
	I0223 12:58:05.968826    7621 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I0223 12:58:05.968940    7621 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I0223 12:58:05.973308    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
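The one-liner above drops any stale host.minikube.internal mapping from /etc/hosts and appends the IP that was just resolved by digging host.docker.internal. A rough Go equivalent of that rewrite, for illustration only (it prints the result instead of copying it back with sudo, as the test run does):

package main

// Illustrative equivalent of the bash rewrite above: filter out the old
// "host.minikube.internal" entry, then append the freshly resolved mapping.

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "host.minikube.internal"
	const ip = "192.168.65.2" // value the log obtained by digging host.docker.internal

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+entry) {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+entry)
	fmt.Println(strings.Join(kept, "\n"))
}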
	I0223 12:58:05.983443    7621 certs.go:56] Setting up /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000 for IP: 192.168.58.3
	I0223 12:58:05.983459    7621 certs.go:186] acquiring lock for shared ca certs: {Name:mk9b7a98958f4333f06cfa6d87963d4d7f2b94cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:58:05.983636    7621 certs.go:195] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key
	I0223 12:58:05.983693    7621 certs.go:195] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key
	I0223 12:58:05.983703    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0223 12:58:05.983725    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0223 12:58:05.983744    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0223 12:58:05.983763    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0223 12:58:05.983846    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem (1338 bytes)
	W0223 12:58:05.983887    7621 certs.go:397] ignoring /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057_empty.pem, impossibly tiny 0 bytes
	I0223 12:58:05.983897    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 12:58:05.983941    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem (1078 bytes)
	I0223 12:58:05.983981    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem (1123 bytes)
	I0223 12:58:05.984022    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/Users/jenkins/minikube-integration/15909-825/.minikube/certs/key.pem (1675 bytes)
	I0223 12:58:05.984103    7621 certs.go:401] found cert: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem (1708 bytes)
	I0223 12:58:05.984136    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem -> /usr/share/ca-certificates/20572.pem
	I0223 12:58:05.984157    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:05.984175    7621 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem -> /usr/share/ca-certificates/2057.pem
	I0223 12:58:05.984499    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 12:58:06.001954    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0223 12:58:06.018933    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 12:58:06.036127    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 12:58:06.053361    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/ssl/certs/20572.pem --> /usr/share/ca-certificates/20572.pem (1708 bytes)
	I0223 12:58:06.070584    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 12:58:06.087761    7621 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15909-825/.minikube/certs/2057.pem --> /usr/share/ca-certificates/2057.pem (1338 bytes)
	I0223 12:58:06.105036    7621 ssh_runner.go:195] Run: openssl version
	I0223 12:58:06.110267    7621 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I0223 12:58:06.110537    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 12:58:06.118593    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.122428    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.122444    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 20:34 /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.122493    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 12:58:06.127668    7621 command_runner.go:130] > b5213941
	I0223 12:58:06.128051    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 12:58:06.135997    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2057.pem && ln -fs /usr/share/ca-certificates/2057.pem /etc/ssl/certs/2057.pem"
	I0223 12:58:06.143987    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.147936    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.148008    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 20:39 /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.148068    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2057.pem
	I0223 12:58:06.153153    7621 command_runner.go:130] > 51391683
	I0223 12:58:06.153487    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2057.pem /etc/ssl/certs/51391683.0"
	I0223 12:58:06.161545    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/20572.pem && ln -fs /usr/share/ca-certificates/20572.pem /etc/ssl/certs/20572.pem"
	I0223 12:58:06.169703    7621 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.173494    7621 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.173519    7621 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 20:39 /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.173562    7621 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/20572.pem
	I0223 12:58:06.178838    7621 command_runner.go:130] > 3ec20f2e
	I0223 12:58:06.179062    7621 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/20572.pem /etc/ssl/certs/3ec20f2e.0"
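The three openssl/ln pairs above compute each certificate's OpenSSL subject hash and install a <hash>.0 symlink under /etc/ssl/certs so TLS clients on the node can locate the CAs. A small sketch of the same pattern (not minikube's own code; it shells out to openssl the same way but writes the link into a scratch directory):

package main

// Sketch of the hash-and-symlink pattern above: ask openssl for the subject
// hash of a PEM certificate, then create a <hash>.0 symlink pointing at it.
// The log does this for minikubeCA.pem, 2057.pem and 20572.pem.

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic "ln -fs": replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir()); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}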
	I0223 12:58:06.187248    7621 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0223 12:58:06.209788    7621 command_runner.go:130] > cgroupfs
	I0223 12:58:06.211461    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:58:06.211475    7621 cni.go:136] 2 nodes found, recommending kindnet
	I0223 12:58:06.211483    7621 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 12:58:06.211498    7621 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.26.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-899000 NodeName:multinode-899000-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 12:58:06.211590    7621 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-899000-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.26.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 12:58:06.211640    7621 kubeadm.go:968] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.26.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-899000-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 12:58:06.211706    7621 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.26.1
	I0223 12:58:06.218964    7621 command_runner.go:130] > kubeadm
	I0223 12:58:06.218973    7621 command_runner.go:130] > kubectl
	I0223 12:58:06.218977    7621 command_runner.go:130] > kubelet
	I0223 12:58:06.219673    7621 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 12:58:06.219735    7621 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0223 12:58:06.227058    7621 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (452 bytes)
	I0223 12:58:06.239739    7621 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 12:58:06.252910    7621 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0223 12:58:06.256788    7621 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 12:58:06.266829    7621 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:58:06.267002    7621 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:58:06.267014    7621 start.go:301] JoinCluster: &{Name:multinode-899000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:multinode-899000 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:58:06.267068    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0223 12:58:06.267120    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:58:06.326048    7621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:58:06.487311    7621 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 898us1.ihys6g4jwq17jiqx --discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 
	I0223 12:58:06.487352    7621 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 12:58:06.487370    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 898us1.ihys6g4jwq17jiqx --discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899000-m02"
	I0223 12:58:06.528112    7621 command_runner.go:130] > [preflight] Running pre-flight checks
	I0223 12:58:06.638474    7621 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0223 12:58:06.638493    7621 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0223 12:58:06.662311    7621 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0223 12:58:06.662325    7621 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0223 12:58:06.662330    7621 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0223 12:58:06.733207    7621 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0223 12:58:20.247375    7621 command_runner.go:130] > This node has joined the cluster:
	I0223 12:58:20.247395    7621 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0223 12:58:20.247403    7621 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0223 12:58:20.247412    7621 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0223 12:58:20.250700    7621 command_runner.go:130] ! W0223 20:58:06.527320    1238 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I0223 12:58:20.250718    7621 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I0223 12:58:20.250730    7621 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0223 12:58:20.250745    7621 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.26.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 898us1.ihys6g4jwq17jiqx --discovery-token-ca-cert-hash sha256:a63362282022fef2dce9e887fad417ce5ac5a6d49146435fc145c8693c619413 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-899000-m02": (13.763113582s)
	I0223 12:58:20.250764    7621 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0223 12:58:20.395854    7621 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /lib/systemd/system/kubelet.service.
	I0223 12:58:20.395873    7621 start.go:303] JoinCluster complete in 14.128603796s
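The join above is driven by the two commands visible in the log: kubeadm token create --print-join-command --ttl=0 on the control plane, then the printed join command, extended with --ignore-preflight-errors=all, --cri-socket and --node-name, on the new node. A minimal sketch of the first half, assuming kubeadm is on PATH (the test run instead resolves it from /var/lib/minikube/binaries/v1.26.1 and ships the output to the worker over SSH):

package main

// Minimal sketch of generating the worker join command, mirroring the
// "kubeadm token create --print-join-command --ttl=0" call logged above.

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "token create failed:", err)
		os.Exit(1)
	}
	fmt.Printf("join command: %s", out)
}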
	I0223 12:58:20.395881    7621 cni.go:84] Creating CNI manager for ""
	I0223 12:58:20.395886    7621 cni.go:136] 2 nodes found, recommending kindnet
	I0223 12:58:20.395975    7621 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0223 12:58:20.399941    7621 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0223 12:58:20.399951    7621 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I0223 12:58:20.399960    7621 command_runner.go:130] > Device: a6h/166d	Inode: 2102733     Links: 1
	I0223 12:58:20.399965    7621 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0223 12:58:20.399973    7621 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:58:20.399978    7621 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I0223 12:58:20.399982    7621 command_runner.go:130] > Change: 2023-02-23 20:33:51.991471766 +0000
	I0223 12:58:20.399991    7621 command_runner.go:130] >  Birth: -
	I0223 12:58:20.400070    7621 cni.go:181] applying CNI manifest using /var/lib/minikube/binaries/v1.26.1/kubectl ...
	I0223 12:58:20.400080    7621 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2428 bytes)
	I0223 12:58:20.413157    7621 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.26.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0223 12:58:20.601827    7621 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0223 12:58:20.604293    7621 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0223 12:58:20.606049    7621 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0223 12:58:20.614565    7621 command_runner.go:130] > daemonset.apps/kindnet configured
	I0223 12:58:20.621583    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:58:20.621803    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos
:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:58:20.622052    7621 round_trippers.go:463] GET https://127.0.0.1:51104/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0223 12:58:20.622058    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:20.622065    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:20.622070    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:20.624633    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:20.624643    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:20.624649    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:20.624655    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:20.624662    7621 round_trippers.go:580]     Content-Length: 291
	I0223 12:58:20.624667    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:20 GMT
	I0223 12:58:20.624673    7621 round_trippers.go:580]     Audit-Id: 67222638-afd0-4c38-84b6-7f76484aec80
	I0223 12:58:20.624678    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:20.624683    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:20.624696    7621 request.go:1171] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"baeff9f2-c3e7-4199-951b-f85fdcaddbe8","resourceVersion":"436","creationTimestamp":"2023-02-23T20:57:22Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0223 12:58:20.624738    7621 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-899000" context rescaled to 1 replicas
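The GET on .../deployments/coredns/scale above reads the Deployment's scale subresource before the "rescaled to 1 replicas" message. A hedged client-go sketch of that read-then-rescale step (not the code the test uses; it assumes a kubeconfig in the default location):

package main

// Sketch: read the coredns Deployment's scale subresource and set
// spec.replicas to 1 if it differs, using client-go's GetScale/UpdateScale.

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx := context.Background()
	scale, err := client.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if scale.Spec.Replicas != 1 {
		scale.Spec.Replicas = 1
		if _, err := client.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
			panic(err)
		}
	}
	fmt.Println("coredns replicas:", scale.Spec.Replicas)
}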
	I0223 12:58:20.624752    7621 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0223 12:58:20.647179    7621 out.go:177] * Verifying Kubernetes components...
	I0223 12:58:20.688288    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:58:20.700123    7621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:58:20.759136    7621 loader.go:373] Config loaded from file:  /Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:58:20.759372    7621 kapi.go:59] client config for multinode-899000: &rest.Config{Host:"https://127.0.0.1:51104", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/profiles/multinode-899000/client.key", CAFile:"/Users/jenkins/minikube-integration/15909-825/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos
:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x25472a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 12:58:20.759603    7621 node_ready.go:35] waiting up to 6m0s for node "multinode-899000-m02" to be "Ready" ...
	I0223 12:58:20.759643    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:20.759647    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:20.759654    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:20.759659    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:20.762524    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:20.762539    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:20.762546    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:20.762551    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:20.762556    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:20 GMT
	I0223 12:58:20.762562    7621 round_trippers.go:580]     Audit-Id: 8634821f-f6a2-4fcd-8192-70855326ddcd
	I0223 12:58:20.762567    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:20.762572    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:20.762649    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"481","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58
:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations" [truncated 3841 chars]
	I0223 12:58:21.263739    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:21.263760    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:21.263772    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:21.263782    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:21.267075    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:21.267092    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:21.267100    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:21 GMT
	I0223 12:58:21.267108    7621 round_trippers.go:580]     Audit-Id: 233e4302-1887-456a-90e8-1a49f891fccd
	I0223 12:58:21.267131    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:21.267136    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:21.267142    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:21.267146    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:21.267208    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"481","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58
:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations" [truncated 3841 chars]
	I0223 12:58:21.763599    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:21.763625    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:21.763637    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:21.763740    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:21.767608    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:21.767620    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:21.767626    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:21.767631    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:21 GMT
	I0223 12:58:21.767636    7621 round_trippers.go:580]     Audit-Id: 260715fd-ae7a-4a7e-a346-9a7f64a75ed4
	I0223 12:58:21.767641    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:21.767646    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:21.767651    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:21.767721    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"481","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}},"f:taints":{}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58
:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations" [truncated 3841 chars]
	I0223 12:58:22.263413    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:22.281474    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.281490    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.281512    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.285120    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:22.285135    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.285143    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.285179    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.285196    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.285208    7621 round_trippers.go:580]     Audit-Id: cb08a44f-4a0c-4330-8e85-d3367c73fc0f
	I0223 12:58:22.285218    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.285228    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.285643    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:22.285851    7621 node_ready.go:49] node "multinode-899000-m02" has status "Ready":"True"
	I0223 12:58:22.285862    7621 node_ready.go:38] duration metric: took 1.526223237s waiting for node "multinode-899000-m02" to be "Ready" ...
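The node_ready loop above simply re-issues GET /api/v1/nodes/multinode-899000-m02 roughly every half second until the node reports Ready. A rough client-go equivalent of that wait, assuming a default kubeconfig (the pod_ready checks that follow use the same pattern against each pod's Ready condition):

package main

// Rough equivalent of the node_ready wait: poll the node object until its
// NodeReady condition reports True, or give up after the 6m timeout.

import (
	"context"
	"fmt"
	"path/filepath"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodeName := "multinode-899000-m02"
	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
		node, err := client.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep retrying on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("node", nodeName, "is Ready")
}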
	I0223 12:58:22.285869    7621 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:58:22.285912    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods
	I0223 12:58:22.285917    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.285923    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.285928    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.289467    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:22.289486    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.289495    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.289503    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.289510    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.289523    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.289534    7621 round_trippers.go:580]     Audit-Id: 0fa86289-091a-4f86-b936-ad688159d7dc
	I0223 12:58:22.289543    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.290808    7621 request.go:1171] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"489"},"items":[{"metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 68605 chars]
	I0223 12:58:22.292404    7621 pod_ready.go:78] waiting up to 6m0s for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.292441    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/coredns-787d4945fb-255qk
	I0223 12:58:22.292446    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.292453    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.292459    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.294734    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.294743    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.294748    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.294753    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.294759    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.294766    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.294771    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.294776    7621 round_trippers.go:580]     Audit-Id: 8d9614e6-a3aa-4f15-a23f-a07d69b29326
	I0223 12:58:22.294870    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-787d4945fb-255qk","generateName":"coredns-787d4945fb-","namespace":"kube-system","uid":"b14a01e5-36d7-4404-9478-12ce93233303","resourceVersion":"432","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"787d4945fb"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-787d4945fb","uid":"5b025404-d182-4142-b75d-8a452fddeda4","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5b025404-d182-4142-b75d-8a452fddeda4\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I0223 12:58:22.295132    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.295138    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.295144    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.295150    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.297169    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.297178    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.297183    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.297188    7621 round_trippers.go:580]     Audit-Id: 813e9c20-c4df-4924-9883-44e58a351344
	I0223 12:58:22.297193    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.297198    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.297203    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.297208    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.297264    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.297447    7621 pod_ready.go:92] pod "coredns-787d4945fb-255qk" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.297453    7621 pod_ready.go:81] duration metric: took 5.040627ms waiting for pod "coredns-787d4945fb-255qk" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.297458    7621 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.297500    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/etcd-multinode-899000
	I0223 12:58:22.297506    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.297512    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.297518    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.299533    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.299543    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.299550    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.299555    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.299561    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.299566    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.299571    7621 round_trippers.go:580]     Audit-Id: 83c24849-e5e1-411b-9327-17c90855767c
	I0223 12:58:22.299578    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.299648    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-899000","namespace":"kube-system","uid":"04c36b20-3f1c-4967-be88-dfaf04e459fb","resourceVersion":"273","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.mirror":"566ae0c6f1e5eb2cbf1380e3d7174fa3","kubernetes.io/config.seen":"2023-02-23T20:57:22.892805434Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 5836 chars]
	I0223 12:58:22.299861    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.299867    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.299873    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.299889    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.301604    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:22.301613    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.301620    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.301625    7621 round_trippers.go:580]     Audit-Id: a54c5ced-d173-4e56-933c-c25de720af53
	I0223 12:58:22.301631    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.301636    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.301641    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.301646    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.301713    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.301882    7621 pod_ready.go:92] pod "etcd-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.301888    7621 pod_ready.go:81] duration metric: took 4.424711ms waiting for pod "etcd-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.301896    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.301922    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-899000
	I0223 12:58:22.301927    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.301933    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.301939    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.304007    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.304016    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.304021    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.304026    7621 round_trippers.go:580]     Audit-Id: 2ff56055-f0d7-4f5b-b20e-b2d0740dfd26
	I0223 12:58:22.304035    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.304041    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.304047    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.304053    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.304140    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-899000","namespace":"kube-system","uid":"8f2e9b4f-7407-4a4f-86d7-cbaa54f4982b","resourceVersion":"275","creationTimestamp":"2023-02-23T20:57:21Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.mirror":"04b8445a9cf4f56fec75b4c565d27f23","kubernetes.io/config.seen":"2023-02-23T20:57:13.277278836Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8222 chars]
	I0223 12:58:22.304381    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.304387    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.304393    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.304398    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.306610    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.306619    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.306626    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.306632    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.306639    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.306644    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.306649    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.306654    7621 round_trippers.go:580]     Audit-Id: 1eac0b7b-1910-496b-a4bf-3d17e072d626
	I0223 12:58:22.306698    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.306872    7621 pod_ready.go:92] pod "kube-apiserver-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.306879    7621 pod_ready.go:81] duration metric: took 4.977088ms waiting for pod "kube-apiserver-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.306884    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.306911    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-899000
	I0223 12:58:22.306915    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.306921    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.306927    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.309090    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.309099    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.309106    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.309111    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.309117    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.309122    7621 round_trippers.go:580]     Audit-Id: 74dac49a-2231-4156-a29e-7edf55e4d2ac
	I0223 12:58:22.309127    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.309132    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.309295    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-899000","namespace":"kube-system","uid":"8a9821eb-106e-43fb-919d-59f0d6132887","resourceVersion":"301","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.mirror":"02827c95207bba4f962be58bf081b453","kubernetes.io/config.seen":"2023-02-23T20:57:22.892794347Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7797 chars]
	I0223 12:58:22.309545    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:22.309552    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.309559    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.309567    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.311558    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:22.311567    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.311573    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.311578    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.311584    7621 round_trippers.go:580]     Audit-Id: ffa67e40-e0c4-43d9-aa6c-3e693de04adc
	I0223 12:58:22.311597    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.311603    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.311608    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.311660    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:22.311869    7621 pod_ready.go:92] pod "kube-controller-manager-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:22.311875    7621 pod_ready.go:81] duration metric: took 4.985214ms waiting for pod "kube-controller-manager-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.311880    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s4pvs" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:22.463487    7621 request.go:622] Waited for 151.531938ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:22.463521    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:22.463525    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.463534    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.463547    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.466180    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:22.466203    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.466213    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.466219    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.466224    7621 round_trippers.go:580]     Audit-Id: e0b73299-3ede-4e4b-9370-6efa33f6aecc
	I0223 12:58:22.466230    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.466246    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.466257    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.466687    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:22.663549    7621 request.go:622] Waited for 196.554283ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:22.663644    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:22.663656    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:22.663672    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:22.663688    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:22.667542    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:22.667558    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:22.667569    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:22.667600    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:22.667612    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:22 GMT
	I0223 12:58:22.667618    7621 round_trippers.go:580]     Audit-Id: d523eead-0b5f-4ce1-911b-1926498d8550
	I0223 12:58:22.667625    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:22.667632    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:22.667711    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
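	(Note: the "Waited for ... due to client-side throttling, not priority and fairness" messages from request.go:622 above come from client-go's default client-side token-bucket limiter, which delays bursts of GETs like this readiness loop. A minimal illustrative sketch, not minikube's code, of how a client-go consumer can raise those limits; the function name newFastClient is a placeholder:

	package example

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient builds a clientset with higher client-side rate limits,
	// so bursts of polling requests are less likely to be delayed by the
	// default limiter (QPS 5, burst 10 when left unset).
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}
	)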
	I0223 12:58:23.169160    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:23.169176    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.169185    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.169192    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.172366    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:23.172379    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.172385    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.172391    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.172396    7621 round_trippers.go:580]     Audit-Id: 2f163b7b-90fd-467d-ad4a-a387c8d49e2b
	I0223 12:58:23.172402    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.172407    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.172412    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.172471    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:23.172731    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:23.172738    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.172744    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.172749    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.174504    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:23.174516    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.174526    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.174532    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.174537    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.174542    7621 round_trippers.go:580]     Audit-Id: ecfc0f37-e6b7-4ac6-8f1e-18862a85d247
	I0223 12:58:23.174554    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.174559    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.174703    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:23.670013    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:23.670038    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.670051    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.670061    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.674512    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:58:23.674525    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.674530    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.674541    7621 round_trippers.go:580]     Audit-Id: 056aa36f-0c10-4ae0-9bf4-ca03416aa192
	I0223 12:58:23.674547    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.674551    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.674564    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.674569    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.674637    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:23.674900    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:23.674906    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:23.674912    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:23.674931    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:23.677078    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:23.677088    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:23.677094    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:23.677099    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:23 GMT
	I0223 12:58:23.677104    7621 round_trippers.go:580]     Audit-Id: ff8172ff-b1fe-4a8b-b7af-56374f7cdb48
	I0223 12:58:23.677109    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:23.677114    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:23.677120    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:23.677159    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:24.168156    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:24.168173    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.168194    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.168203    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.171200    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:24.171214    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.171221    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.171227    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.171233    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.171241    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.171246    7621 round_trippers.go:580]     Audit-Id: 28a433ab-c23e-4cea-91d9-4cd9d5678c1c
	I0223 12:58:24.171251    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.171323    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:24.171729    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:24.171736    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.171742    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.171748    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.173941    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:24.173954    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.173961    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.173966    7621 round_trippers.go:580]     Audit-Id: 923e8375-07b2-49ae-938b-d9fe78c92800
	I0223 12:58:24.173971    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.173976    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.173981    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.173986    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.174461    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:24.668503    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:24.668532    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.668547    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.668558    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.672135    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:24.672146    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.672151    7621 round_trippers.go:580]     Audit-Id: 399b9bee-22b3-40a3-81c4-511834fd3059
	I0223 12:58:24.672171    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.672182    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.672188    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.672193    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.672198    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.672261    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:24.672510    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:24.672515    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:24.672521    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:24.672526    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:24.674668    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:24.674677    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:24.674683    7621 round_trippers.go:580]     Audit-Id: be7c54a9-26d0-4c17-82e3-057e89bf33af
	I0223 12:58:24.674688    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:24.674694    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:24.674699    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:24.674704    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:24.674709    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:24 GMT
	I0223 12:58:24.674760    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"489","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4134 chars]
	I0223 12:58:24.674928    7621 pod_ready.go:102] pod "kube-proxy-s4pvs" in "kube-system" namespace has status "Ready":"False"
	I0223 12:58:25.169265    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:25.169286    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.169297    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.169305    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.173798    7621 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0223 12:58:25.173815    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.173823    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.173830    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.173837    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.173845    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.173852    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.173860    7621 round_trippers.go:580]     Audit-Id: 31c361dc-6fca-45f2-9337-041e6a2218c9
	I0223 12:58:25.173955    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:25.174291    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:25.174307    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.174317    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.174324    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.177082    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:25.177099    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.177110    7621 round_trippers.go:580]     Audit-Id: fb6c328b-26d6-4b6e-9966-0f8d2292d414
	I0223 12:58:25.177119    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.177132    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.177145    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.177156    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.177164    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.177304    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:25.670178    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:25.670203    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.670215    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.670226    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.674069    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:25.674081    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.674087    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.674093    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.674101    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.674108    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.674113    7621 round_trippers.go:580]     Audit-Id: e714f7e2-ccbc-4485-9683-8c7dbe3439ae
	I0223 12:58:25.674118    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.674177    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:25.674456    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:25.674462    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:25.674468    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:25.674475    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:25.676559    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:25.676569    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:25.676574    7621 round_trippers.go:580]     Audit-Id: bc4c3a54-5c55-402a-bb8d-b407c0267a1b
	I0223 12:58:25.676580    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:25.676585    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:25.676590    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:25.676597    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:25.676602    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:25 GMT
	I0223 12:58:25.676648    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:26.168415    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:26.168431    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.168440    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.168447    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.171570    7621 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0223 12:58:26.171583    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.171589    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.171601    7621 round_trippers.go:580]     Audit-Id: d70b06b6-a2e6-4916-a401-d314acfe5894
	I0223 12:58:26.171607    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.171612    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.171617    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.171622    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.171691    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"486","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5548 chars]
	I0223 12:58:26.171940    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:26.171946    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.171952    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.171957    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.174239    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.174249    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.174255    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.174260    7621 round_trippers.go:580]     Audit-Id: a66cc2c1-1124-4166-942e-c679f1ef9f61
	I0223 12:58:26.174267    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.174273    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.174279    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.174285    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.174338    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:26.668178    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-s4pvs
	I0223 12:58:26.668193    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.668202    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.668209    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.671016    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.671026    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.671031    7621 round_trippers.go:580]     Audit-Id: d1826630-db1e-4ae5-a106-a40117931893
	I0223 12:58:26.671037    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.671043    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.671048    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.671053    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.671058    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.671114    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s4pvs","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a97c4b0-ae90-4c5b-bf47-3f67c0d63824","resourceVersion":"499","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5537 chars]
	I0223 12:58:26.671369    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000-m02
	I0223 12:58:26.671376    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.671384    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.671392    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.673360    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:26.673369    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.673375    7621 round_trippers.go:580]     Audit-Id: 2c082fdf-5659-4f79-bf1b-49ad416038a2
	I0223 12:58:26.673380    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.673385    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.673390    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.673396    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.673401    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.673448    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000-m02","uid":"5e5d30db-0b82-4cd6-a786-253e8b4b3bfa","resourceVersion":"493","creationTimestamp":"2023-02-23T20:58:19Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:58:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{
}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time [truncated 4014 chars]
	I0223 12:58:26.673605    7621 pod_ready.go:92] pod "kube-proxy-s4pvs" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:26.673615    7621 pod_ready.go:81] duration metric: took 4.361651931s waiting for pod "kube-proxy-s4pvs" in "kube-system" namespace to be "Ready" ...
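	(Note: the pod_ready.go lines above reflect a standard readiness poll: re-fetch the pod and its node until the pod reports status "Ready":"True", with a 6m0s cap. A minimal illustrative sketch of that pattern using client-go, under the assumption of a plain condition-poll loop; waitForPodReady is a placeholder name, not minikube's actual implementation:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls the pod every 500ms for up to 6 minutes until
	// its PodReady condition is True, mirroring the GET loop in the log.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		return wait.PollImmediateWithContext(ctx, 500*time.Millisecond, 6*time.Minute,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
	)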
	I0223 12:58:26.673621    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.673649    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-proxy-w885m
	I0223 12:58:26.673660    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.673666    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.673672    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.675790    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.675799    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.675804    7621 round_trippers.go:580]     Audit-Id: 0c849d0d-2d41-474c-97e4-77c06ce32938
	I0223 12:58:26.675809    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.675814    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.675818    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.675823    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.675828    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.676087    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-w885m","generateName":"kube-proxy-","namespace":"kube-system","uid":"9e1284e2-dcb3-408c-bc90-a501107f7e23","resourceVersion":"397","creationTimestamp":"2023-02-23T20:57:35Z","labels":{"controller-revision-hash":"6bc4695d8c","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2c7cea1b-f706-4486-9e8e-1cf31ca6fcff\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5529 chars]
	I0223 12:58:26.676334    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:26.676340    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.676346    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.676352    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.678207    7621 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0223 12:58:26.678217    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.678222    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.678227    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.678232    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.678237    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.678242    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.678247    7621 round_trippers.go:580]     Audit-Id: b6a72a27-6d6f-4552-9a0b-c09c13cc1b60
	I0223 12:58:26.678297    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:26.678477    7621 pod_ready.go:92] pod "kube-proxy-w885m" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:26.678483    7621 pod_ready.go:81] duration metric: took 4.857735ms waiting for pod "kube-proxy-w885m" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.678489    7621 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.678516    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-899000
	I0223 12:58:26.678520    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.678525    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.678535    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.681020    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.681034    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.681043    7621 round_trippers.go:580]     Audit-Id: 9922716d-f364-401e-a315-abcb6d6ee5a1
	I0223 12:58:26.681049    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.681055    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.681068    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.681074    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.681079    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.681134    7621 request.go:1171] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-899000","namespace":"kube-system","uid":"b864a38e-68d2-4949-92a9-0f736cbdf7fe","resourceVersion":"296","creationTimestamp":"2023-02-23T20:57:23Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"bad6109cbec6cd514239122749558677","kubernetes.io/config.mirror":"bad6109cbec6cd514239122749558677","kubernetes.io/config.seen":"2023-02-23T20:57:22.892804438Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-02-23T20:57:23Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4679 chars]
	I0223 12:58:26.681340    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes/multinode-899000
	I0223 12:58:26.681347    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.681352    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.681358    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.683441    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.683450    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.683455    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.683462    7621 round_trippers.go:580]     Audit-Id: 84aa2aba-0871-4a2d-907f-4d1b1b3321fa
	I0223 12:58:26.683467    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.683472    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.683477    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.683482    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.683531    7621 request.go:1171] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2023-02-23T20:57:19Z","fieldsType":"FieldsV1","fi [truncated 5116 chars]
	I0223 12:58:26.683707    7621 pod_ready.go:92] pod "kube-scheduler-multinode-899000" in "kube-system" namespace has status "Ready":"True"
	I0223 12:58:26.683713    7621 pod_ready.go:81] duration metric: took 5.219031ms waiting for pod "kube-scheduler-multinode-899000" in "kube-system" namespace to be "Ready" ...
	I0223 12:58:26.683719    7621 pod_ready.go:38] duration metric: took 4.397762119s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 12:58:26.683729    7621 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 12:58:26.683790    7621 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:58:26.693999    7621 system_svc.go:56] duration metric: took 10.266093ms WaitForService to wait for kubelet.
	I0223 12:58:26.694012    7621 kubeadm.go:578] duration metric: took 6.0691297s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 12:58:26.694024    7621 node_conditions.go:102] verifying NodePressure condition ...
	I0223 12:58:26.864431    7621 request.go:622] Waited for 170.35072ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:51104/api/v1/nodes
	I0223 12:58:26.864456    7621 round_trippers.go:463] GET https://127.0.0.1:51104/api/v1/nodes
	I0223 12:58:26.864461    7621 round_trippers.go:469] Request Headers:
	I0223 12:58:26.864467    7621 round_trippers.go:473]     Accept: application/json, */*
	I0223 12:58:26.864480    7621 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0223 12:58:26.867099    7621 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0223 12:58:26.867110    7621 round_trippers.go:577] Response Headers:
	I0223 12:58:26.867116    7621 round_trippers.go:580]     Audit-Id: 8ac1b85e-7612-4e27-94c3-795258ab68fa
	I0223 12:58:26.867121    7621 round_trippers.go:580]     Cache-Control: no-cache, private
	I0223 12:58:26.867126    7621 round_trippers.go:580]     Content-Type: application/json
	I0223 12:58:26.867131    7621 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 8da2821b-b3fa-4abc-8389-66c19712f825
	I0223 12:58:26.867135    7621 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 3c9efaa3-52e7-4dfa-a83d-4f7ed5212837
	I0223 12:58:26.867141    7621 round_trippers.go:580]     Date: Thu, 23 Feb 2023 20:58:26 GMT
	I0223 12:58:26.867223    7621 request.go:1171] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"501"},"items":[{"metadata":{"name":"multinode-899000","uid":"138127bc-110a-49bc-80df-5a506115ecec","resourceVersion":"438","creationTimestamp":"2023-02-23T20:57:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-899000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"7816f70daabe48630c945a757f21bf8d759fce7d","minikube.k8s.io/name":"multinode-899000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_02_23T12_57_23_0700","minikube.k8s.io/version":"v1.29.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10175 chars]
	I0223 12:58:26.867530    7621 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0223 12:58:26.867542    7621 node_conditions.go:123] node cpu capacity is 6
	I0223 12:58:26.867548    7621 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I0223 12:58:26.867552    7621 node_conditions.go:123] node cpu capacity is 6
	I0223 12:58:26.867555    7621 node_conditions.go:105] duration metric: took 173.524522ms to run NodePressure ...
	I0223 12:58:26.867563    7621 start.go:228] waiting for startup goroutines ...
	I0223 12:58:26.867585    7621 start.go:242] writing updated cluster config ...
	I0223 12:58:26.867895    7621 ssh_runner.go:195] Run: rm -f paused
	I0223 12:58:26.906037    7621 start.go:555] kubectl: 1.25.4, cluster: 1.26.1 (minor skew: 1)
	I0223 12:58:26.950565    7621 out.go:177] * Done! kubectl is now configured to use "multinode-899000" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Thu 2023-02-23 20:57:05 UTC, end at Thu 2023-02-23 20:58:39 UTC. --
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317595816Z" level=info msg="[core] [Channel #4] original dial target is: \"unix:///run/containerd/containerd.sock\"" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317615669Z" level=info msg="[core] [Channel #4] parsed dial target is: {Scheme:unix Authority: Endpoint:run/containerd/containerd.sock URL:{Scheme:unix Opaque: User: Host: Path:/run/containerd/containerd.sock RawPath: OmitHost:false ForceQuery:false RawQuery: Fragment: RawFragment:}}" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317624695Z" level=info msg="[core] [Channel #4] Channel authority set to \"localhost\"" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317678328Z" level=info msg="[core] [Channel #4] Resolver state updated: {\n  \"Addresses\": [\n    {\n      \"Addr\": \"/run/containerd/containerd.sock\",\n      \"ServerName\": \"\",\n      \"Attributes\": {},\n      \"BalancerAttributes\": null,\n      \"Type\": 0,\n      \"Metadata\": null\n    }\n  ],\n  \"ServiceConfig\": null,\n  \"Attributes\": null\n} (resolver returned new addresses)" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317700895Z" level=info msg="[core] [Channel #4] Channel switches to new LB policy \"pick_first\"" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317750973Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel created" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317794393Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to CONNECTING" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317867753Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel picks a new address \"/run/containerd/containerd.sock\" to connect" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.317908973Z" level=info msg="[core] [Channel #4] Channel Connectivity change to CONNECTING" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.318170773Z" level=info msg="[core] [Channel #4 SubChannel #5] Subchannel Connectivity change to READY" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.318238674Z" level=info msg="[core] [Channel #4] Channel Connectivity change to READY" module=grpc
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.318683224Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.326002821Z" level=info msg="Loading containers: start."
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.402284497Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.434330009Z" level=info msg="Loading containers: done."
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.442374964Z" level=info msg="Docker daemon" commit=bc3805a graphdriver=overlay2 version=23.0.1
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.442442869Z" level=info msg="Daemon has completed initialization"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.462200298Z" level=info msg="[core] [Server #7] Server created" module=grpc
	Feb 23 20:57:09 multinode-899000 systemd[1]: Started Docker Application Container Engine.
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.466131795Z" level=info msg="API listen on [::]:2376"
	Feb 23 20:57:09 multinode-899000 dockerd[831]: time="2023-02-23T20:57:09.472601456Z" level=info msg="API listen on /var/run/docker.sock"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.560555858Z" level=info msg="ignoring event" container=6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.671502523Z" level=info msg="ignoring event" container=94788107a1e93da48536e32619b66fa9469e39a448fe8c3b0b247522d98cd443 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.786982860Z" level=info msg="ignoring event" container=2dbb1ff5944ec88f0c4829cd85418f0b56c5be224ce4e787b39d286e88707372 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Feb 23 20:57:50 multinode-899000 dockerd[831]: time="2023-02-23T20:57:50.874477341Z" level=info msg="ignoring event" container=4f5a4c753a363cbe7fe0e463e5f59c0f384563f5ecb47b2847d94f12c34d7324 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	5525218a9e92a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   9 seconds ago        Running             busybox                   0                   e8013f02ecb87
	76bce82b7d450       5185b96f0becf                                                                                         48 seconds ago       Running             coredns                   1                   03e8a7447b139
	5b4de5d50db8f       kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe              About a minute ago   Running             kindnet-cni               0                   d3e6dd0e53d06
	086926cf4bd23       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       0                   ec2713b77469d
	2dbb1ff5944ec       5185b96f0becf                                                                                         About a minute ago   Exited              coredns                   0                   4f5a4c753a363
	730147186f0db       46a6bb3c77ce0                                                                                         About a minute ago   Running             kube-proxy                0                   102c80b0fd0ca
	4a8468b488876       fce326961ae2d                                                                                         About a minute ago   Running             etcd                      0                   2711c694901fd
	db112877a70a1       e9c08e11b07f6                                                                                         About a minute ago   Running             kube-controller-manager   0                   58092128f89d6
	ad8fcd7a26ca5       deb04688c4a35                                                                                         About a minute ago   Running             kube-apiserver            0                   5a80b48095304
	8d0f71f04e8a7       655493523f607                                                                                         About a minute ago   Running             kube-scheduler            0                   921320e519fa2
	
	* 
	* ==> coredns [2dbb1ff5944e] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/errors: 2 5394272695607976485.1833153103134811429. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 5394272695607976485.1833153103134811429. HINFO: dial udp 192.168.65.2:53: connect: network is unreachable
	
	* 
	* ==> coredns [76bce82b7d45] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 8846d9ca81164c00fa03e78dfcf1a6846552cc49335bc010218794b8cfaf537759aa4b596e7dc20c0f618e8eb07603c0139662b99dfa3de45b176fbe7fb57ce1
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] 127.0.0.1:41373 - 46368 "HINFO IN 5785576392753736130.8609393905576695230. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015578364s
	[INFO] 10.244.0.3:41356 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000173355s
	[INFO] 10.244.0.3:57235 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.050066565s
	[INFO] 10.244.0.3:42102 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.003603104s
	[INFO] 10.244.0.3:45013 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.012998473s
	[INFO] 10.244.0.3:59007 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125346s
	[INFO] 10.244.0.3:34307 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005911723s
	[INFO] 10.244.0.3:37248 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000236478s
	[INFO] 10.244.0.3:51055 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104314s
	[INFO] 10.244.0.3:44170 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.004655167s
	[INFO] 10.244.0.3:55871 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000108021s
	[INFO] 10.244.0.3:41998 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117292s
	[INFO] 10.244.0.3:39672 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000121114s
	[INFO] 10.244.0.3:48038 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000153102s
	[INFO] 10.244.0.3:37055 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000087004s
	[INFO] 10.244.0.3:46193 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093886s
	[INFO] 10.244.0.3:40304 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000074182s
	[INFO] 10.244.0.3:53194 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139298s
	[INFO] 10.244.0.3:53522 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000116105s
	[INFO] 10.244.0.3:60179 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000109907s
	[INFO] 10.244.0.3:48088 - 5 "PTR IN 2.65.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000059643s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-899000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7816f70daabe48630c945a757f21bf8d759fce7d
	                    minikube.k8s.io/name=multinode-899000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T12_57_23_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 20:57:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899000
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 20:58:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 20:57:53 +0000   Thu, 23 Feb 2023 20:57:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-899000
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    ca13ab7a-8d3b-40f9-b8eb-210af75da760
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-c2dqh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-787d4945fb-255qk                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     64s
	  kube-system                 etcd-multinode-899000                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         76s
	  kube-system                 kindnet-gvns6                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      64s
	  kube-system                 kube-apiserver-multinode-899000             250m (4%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 kube-controller-manager-multinode-899000    200m (3%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-proxy-w885m                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 kube-scheduler-multinode-899000             100m (1%)     0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 62s                kube-proxy       
	  Normal  NodeHasSufficientMemory  86s (x5 over 86s)  kubelet          Node multinode-899000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x3 over 86s)  kubelet          Node multinode-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x3 over 86s)  kubelet          Node multinode-899000 status is now: NodeHasSufficientPID
	  Normal  Starting                 77s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  77s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  76s                kubelet          Node multinode-899000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s                kubelet          Node multinode-899000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s                kubelet          Node multinode-899000 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             76s                kubelet          Node multinode-899000 status is now: NodeNotReady
	  Normal  NodeReady                66s                kubelet          Node multinode-899000 status is now: NodeReady
	  Normal  RegisteredNode           65s                node-controller  Node multinode-899000 event: Registered Node multinode-899000 in Controller
	
	
	Name:               multinode-899000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-899000-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 20:58:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-899000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 20:58:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 20:58:22 +0000   Thu, 23 Feb 2023 20:58:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-899000-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-2Mi:      0
	  memory:             6085660Ki
	  pods:               110
	System Info:
	  Machine ID:                 d2c8c90d305d4c21867ffd1b1748456b
	  System UUID:                d2c8c90d305d4c21867ffd1b1748456b
	  Boot ID:                    ca13ab7a-8d3b-40f9-b8eb-210af75da760
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://23.0.1
	  Kubelet Version:            v1.26.1
	  Kube-Proxy Version:         v1.26.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-6b86dd6d48-8hfr6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 kindnet-xk4c6               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-proxy-s4pvs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  RegisteredNode           20s                node-controller  Node multinode-899000-m02 event: Registered Node multinode-899000-m02 in Controller
	  Normal  NodeHasSufficientMemory  20s (x8 over 32s)  kubelet          Node multinode-899000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20s (x8 over 32s)  kubelet          Node multinode-899000-m02 status is now: NodeHasNoDiskPressure
	
	* 
	* ==> dmesg <==
	* [  +0.000067] FS-Cache: O-key=[8] '74557e0500000000'
	[  +0.000037] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.000059] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=0000000066e9be13
	[  +0.000143] FS-Cache: N-key=[8] '74557e0500000000'
	[  +0.003159] FS-Cache: Duplicate cookie detected
	[  +0.000048] FS-Cache: O-cookie c=00000007 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000046] FS-Cache: O-cookie d=00000000df813808{9p.inode} n=00000000f5fd9442
	[  +0.000058] FS-Cache: O-key=[8] '74557e0500000000'
	[  +0.000052] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.000034] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=000000002aa4df33
	[  +0.000075] FS-Cache: N-key=[8] '74557e0500000000'
	[  +3.589013] FS-Cache: Duplicate cookie detected
	[  +0.000046] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.000033] FS-Cache: O-cookie d=00000000df813808{9p.inode} n=0000000081eaffce
	[  +0.000086] FS-Cache: O-key=[8] '73557e0500000000'
	[  +0.000053] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000065] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=00000000555cd28a
	[  +0.000052] FS-Cache: N-key=[8] '73557e0500000000'
	[  +0.394725] FS-Cache: Duplicate cookie detected
	[  +0.000039] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.000057] FS-Cache: O-cookie d=00000000df813808{9p.inode} n=000000009e5b0d36
	[  +0.000055] FS-Cache: O-key=[8] '85557e0500000000'
	[  +0.000046] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.000058] FS-Cache: N-cookie d=00000000df813808{9p.inode} n=000000002aa4df33
	[  +0.000072] FS-Cache: N-key=[8] '85557e0500000000'
	
	* 
	* ==> etcd [4a8468b48887] <==
	* {"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:687","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:762","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T20:57:18.160Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 1"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 1"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2023-02-23T20:57:18.955Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-899000 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T20:57:18.956Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T20:57:18.957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-02-23T20:57:18.958Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T20:57:18.958Z","caller":"embed/serve.go:198","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2023-02-23T20:57:58.056Z","caller":"traceutil/trace.go:171","msg":"trace[1630010306] transaction","detail":"{read_only:false; response_revision:442; number_of_response:1; }","duration":"156.95355ms","start":"2023-02-23T20:57:57.899Z","end":"2023-02-23T20:57:58.056Z","steps":["trace[1630010306] 'process raft request'  (duration: 156.824827ms)"],"step_count":1}
	{"level":"info","ts":"2023-02-23T20:58:00.285Z","caller":"traceutil/trace.go:171","msg":"trace[858322572] transaction","detail":"{read_only:false; response_revision:443; number_of_response:1; }","duration":"215.454279ms","start":"2023-02-23T20:58:00.069Z","end":"2023-02-23T20:58:00.285Z","steps":["trace[858322572] 'process raft request'  (duration: 215.306045ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  20:58:39 up 26 min,  0 users,  load average: 0.66, 1.02, 0.84
	Linux multinode-899000 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kindnet [5b4de5d50db8] <==
	* I0223 20:57:40.236507       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0223 20:57:40.236581       1 main.go:107] hostIP = 192.168.58.2
	podIP = 192.168.58.2
	I0223 20:57:40.236778       1 main.go:116] setting mtu 1500 for CNI 
	I0223 20:57:40.236798       1 main.go:146] kindnetd IP family: "ipv4"
	I0223 20:57:40.236815       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0223 20:57:40.637350       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:57:40.733613       1 main.go:227] handling current node
	I0223 20:57:50.740160       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:57:50.740212       1 main.go:227] handling current node
	I0223 20:58:00.752722       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:00.752762       1 main.go:227] handling current node
	I0223 20:58:10.756060       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:10.756102       1 main.go:227] handling current node
	I0223 20:58:20.767955       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:20.767994       1 main.go:227] handling current node
	I0223 20:58:20.768002       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 20:58:20.768009       1 main.go:250] Node multinode-899000-m02 has CIDR [10.244.1.0/24] 
	I0223 20:58:20.768106       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.58.3 Flags: [] Table: 0} 
	I0223 20:58:30.772237       1 main.go:223] Handling node with IPs: map[192.168.58.2:{}]
	I0223 20:58:30.772313       1 main.go:227] handling current node
	I0223 20:58:30.772321       1 main.go:223] Handling node with IPs: map[192.168.58.3:{}]
	I0223 20:58:30.772325       1 main.go:250] Node multinode-899000-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [ad8fcd7a26ca] <==
	* I0223 20:57:20.086891       1 cache.go:39] Caches are synced for autoregister controller
	I0223 20:57:20.086901       1 apf_controller.go:366] Running API Priority and Fairness config worker
	I0223 20:57:20.086910       1 apf_controller.go:369] Running API Priority and Fairness periodic rebalancing process
	I0223 20:57:20.087128       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 20:57:20.087177       1 shared_informer.go:280] Caches are synced for crd-autoregister
	I0223 20:57:20.087425       1 shared_informer.go:280] Caches are synced for cluster_authentication_trust_controller
	I0223 20:57:20.133257       1 shared_informer.go:280] Caches are synced for node_authorizer
	I0223 20:57:20.133259       1 shared_informer.go:280] Caches are synced for configmaps
	I0223 20:57:20.143946       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 20:57:20.814463       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 20:57:20.991626       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0223 20:57:20.994342       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0223 20:57:20.994379       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 20:57:21.438083       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 20:57:21.465950       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 20:57:21.562181       1 alloc.go:327] "allocated clusterIPs" service="default/kubernetes" clusterIPs=map[IPv4:10.96.0.1]
	W0223 20:57:21.567673       1 lease.go:251] Resetting endpoints for master service "kubernetes" to [192.168.58.2]
	I0223 20:57:21.568688       1 controller.go:615] quota admission added evaluator for: endpoints
	I0223 20:57:21.571938       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 20:57:22.052799       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0223 20:57:22.805467       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0223 20:57:22.812872       1 alloc.go:327] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs=map[IPv4:10.96.0.10]
	I0223 20:57:22.819963       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0223 20:57:35.741904       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0223 20:57:35.842422       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [db112877a70a] <==
	* I0223 20:57:35.002353       1 shared_informer.go:280] Caches are synced for disruption
	I0223 20:57:35.060218       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 20:57:35.074779       1 shared_informer.go:280] Caches are synced for stateful set
	I0223 20:57:35.090906       1 shared_informer.go:280] Caches are synced for daemon sets
	I0223 20:57:35.143634       1 shared_informer.go:280] Caches are synced for resource quota
	I0223 20:57:35.458522       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 20:57:35.539480       1 shared_informer.go:280] Caches are synced for garbage collector
	I0223 20:57:35.539520       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 20:57:35.746743       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-787d4945fb to 2"
	I0223 20:57:35.848511       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-w885m"
	I0223 20:57:35.850369       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gvns6"
	I0223 20:57:35.944023       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-255qk"
	I0223 20:57:35.948150       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-787d4945fb-fllr8"
	I0223 20:57:36.066614       1 event.go:294] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-787d4945fb to 1 from 2"
	I0223 20:57:36.072457       1 event.go:294] "Event occurred" object="kube-system/coredns-787d4945fb" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-787d4945fb-fllr8"
	W0223 20:58:19.875734       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-899000-m02" does not exist
	I0223 20:58:19.879633       1 range_allocator.go:372] Set node multinode-899000-m02 PodCIDR to [10.244.1.0/24]
	I0223 20:58:19.882750       1 event.go:294] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-s4pvs"
	I0223 20:58:19.885952       1 event.go:294] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xk4c6"
	W0223 20:58:19.891471       1 node_lifecycle_controller.go:1053] Missing timestamp for Node multinode-899000-m02. Assuming now as a timestamp.
	I0223 20:58:19.891626       1 event.go:294] "Event occurred" object="multinode-899000-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-899000-m02 event: Registered Node multinode-899000-m02 in Controller"
	W0223 20:58:22.124444       1 topologycache.go:232] Can't get CPU or zone information for multinode-899000-m02 node
	I0223 20:58:28.058105       1 event.go:294] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-6b86dd6d48 to 2"
	I0223 20:58:28.103984       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-8hfr6"
	I0223 20:58:28.114204       1 event.go:294] "Event occurred" object="default/busybox-6b86dd6d48" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-6b86dd6d48-c2dqh"
	
	* 
	* ==> kube-proxy [730147186f0d] <==
	* I0223 20:57:36.983626       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I0223 20:57:36.983715       1 server_others.go:109] "Detected node IP" address="192.168.58.2"
	I0223 20:57:36.983760       1 server_others.go:535] "Using iptables proxy"
	I0223 20:57:37.016107       1 server_others.go:176] "Using iptables Proxier"
	I0223 20:57:37.016152       1 server_others.go:183] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I0223 20:57:37.016159       1 server_others.go:184] "Creating dualStackProxier for iptables"
	I0223 20:57:37.016175       1 server_others.go:465] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I0223 20:57:37.016193       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0223 20:57:37.016844       1 server.go:655] "Version info" version="v1.26.1"
	I0223 20:57:37.016879       1 server.go:657] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 20:57:37.017415       1 config.go:226] "Starting endpoint slice config controller"
	I0223 20:57:37.017422       1 config.go:317] "Starting service config controller"
	I0223 20:57:37.017480       1 shared_informer.go:273] Waiting for caches to sync for service config
	I0223 20:57:37.017481       1 shared_informer.go:273] Waiting for caches to sync for endpoint slice config
	I0223 20:57:37.033506       1 config.go:444] "Starting node config controller"
	I0223 20:57:37.033559       1 shared_informer.go:273] Waiting for caches to sync for node config
	I0223 20:57:37.118554       1 shared_informer.go:280] Caches are synced for service config
	I0223 20:57:37.118595       1 shared_informer.go:280] Caches are synced for endpoint slice config
	I0223 20:57:37.133594       1 shared_informer.go:280] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [8d0f71f04e8a] <==
	* W0223 20:57:20.051035       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 20:57:20.051048       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 20:57:20.051215       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 20:57:20.051242       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.051255       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.051259       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0223 20:57:20.051370       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0223 20:57:20.051381       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0223 20:57:20.860893       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.860951       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0223 20:57:20.886024       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 20:57:20.886111       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 20:57:20.943345       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0223 20:57:20.943391       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0223 20:57:20.943422       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.943434       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0223 20:57:20.973162       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0223 20:57:20.973225       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0223 20:57:21.134932       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0223 20:57:21.134993       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0223 20:57:21.234479       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 20:57:21.234565       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 20:57:21.238586       1 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 20:57:21.238626       1 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0223 20:57:21.647659       1 shared_informer.go:280] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Thu 2023-02-23 20:57:05 UTC, end at Thu 2023-02-23 20:58:40 UTC. --
	Feb 23 20:57:37 multinode-899000 kubelet[2151]: I0223 20:57:37.576720    2151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94788107a1e93da48536e32619b66fa9469e39a448fe8c3b0b247522d98cd443"
	Feb 23 20:57:37 multinode-899000 kubelet[2151]: I0223 20:57:37.859493    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-fllr8" podStartSLOduration=2.859454877 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:37.858705972 +0000 UTC m=+15.067993982" watchObservedRunningTime="2023-02-23 20:57:37.859454877 +0000 UTC m=+15.068742881"
	Feb 23 20:57:38 multinode-899000 kubelet[2151]: I0223 20:57:38.259367    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w885m" podStartSLOduration=3.2593411420000002 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:38.259216574 +0000 UTC m=+15.468504578" watchObservedRunningTime="2023-02-23 20:57:38.259341142 +0000 UTC m=+15.468629146"
	Feb 23 20:57:38 multinode-899000 kubelet[2151]: I0223 20:57:38.659955    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.659930838 pod.CreationTimestamp="2023-02-23 20:57:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:38.659850511 +0000 UTC m=+15.869138519" watchObservedRunningTime="2023-02-23 20:57:38.659930838 +0000 UTC m=+15.869218841"
	Feb 23 20:57:40 multinode-899000 kubelet[2151]: I0223 20:57:40.654967    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-255qk" podStartSLOduration=5.654940557 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:39.064113915 +0000 UTC m=+16.273401920" watchObservedRunningTime="2023-02-23 20:57:40.654940557 +0000 UTC m=+17.864228561"
	Feb 23 20:57:40 multinode-899000 kubelet[2151]: I0223 20:57:40.655104    2151 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gvns6" podStartSLOduration=-9.223372031199686e+09 pod.CreationTimestamp="2023-02-23 20:57:35 +0000 UTC" firstStartedPulling="2023-02-23 20:57:36.991473098 +0000 UTC m=+14.200761098" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-02-23 20:57:40.65485295 +0000 UTC m=+17.864140959" watchObservedRunningTime="2023-02-23 20:57:40.655089167 +0000 UTC m=+17.864377171"
	Feb 23 20:57:43 multinode-899000 kubelet[2151]: I0223 20:57:43.559775    2151 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Feb 23 20:57:43 multinode-899000 kubelet[2151]: I0223 20:57:43.560481    2151 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.746650    2151 scope.go:115] "RemoveContainer" containerID="6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.756619    2151 scope.go:115] "RemoveContainer" containerID="6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: E0223 20:57:50.757576    2151 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7" containerID="6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.757630    2151 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:docker ID:6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7} err="failed to get container status \"6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7\": rpc error: code = Unknown desc = Error: No such container: 6a2be21b93531149ffcb58947655477919a621aba389f83e75ed253fbe96e7b7"
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.886272    2151 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64ntk\" (UniqueName: \"kubernetes.io/projected/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-kube-api-access-64ntk\") pod \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\" (UID: \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\") "
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.886337    2151 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-config-volume\") pod \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\" (UID: \"9f55cbe6-d30b-4575-96d6-0d79d5e6a97b\") "
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: W0223 20:57:50.886465    2151 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.886580    2151 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-config-volume" (OuterVolumeSpecName: "config-volume") pod "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" (UID: "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.888384    2151 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-kube-api-access-64ntk" (OuterVolumeSpecName: "kube-api-access-64ntk") pod "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" (UID: "9f55cbe6-d30b-4575-96d6-0d79d5e6a97b"). InnerVolumeSpecName "kube-api-access-64ntk". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.986964    2151 reconciler_common.go:295] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-config-volume\") on node \"multinode-899000\" DevicePath \"\""
	Feb 23 20:57:50 multinode-899000 kubelet[2151]: I0223 20:57:50.987016    2151 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-64ntk\" (UniqueName: \"kubernetes.io/projected/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b-kube-api-access-64ntk\") on node \"multinode-899000\" DevicePath \"\""
	Feb 23 20:57:51 multinode-899000 kubelet[2151]: I0223 20:57:51.076428    2151 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=9f55cbe6-d30b-4575-96d6-0d79d5e6a97b path="/var/lib/kubelet/pods/9f55cbe6-d30b-4575-96d6-0d79d5e6a97b/volumes"
	Feb 23 20:57:51 multinode-899000 kubelet[2151]: I0223 20:57:51.765889    2151 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f5a4c753a363cbe7fe0e463e5f59c0f384563f5ecb47b2847d94f12c34d7324"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: I0223 20:58:28.118887    2151 topology_manager.go:210] "Topology Admit Handler"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: E0223 20:58:28.119228    2151 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" containerName="coredns"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: I0223 20:58:28.119342    2151 memory_manager.go:346] "RemoveStaleState removing state" podUID="9f55cbe6-d30b-4575-96d6-0d79d5e6a97b" containerName="coredns"
	Feb 23 20:58:28 multinode-899000 kubelet[2151]: I0223 20:58:28.258167    2151 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkv5p\" (UniqueName: \"kubernetes.io/projected/c0b18eec-d8fe-4ce9-bc1f-74eae6a40582-kube-api-access-hkv5p\") pod \"busybox-6b86dd6d48-c2dqh\" (UID: \"c0b18eec-d8fe-4ce9-bc1f-74eae6a40582\") " pod="default/busybox-6b86dd6d48-c2dqh"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-899000 -n multinode-899000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-899000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (4.66s)

                                                
                                    
TestRunningBinaryUpgrade (82.72s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.687133155.exe start -p running-upgrade-312000 --memory=2200 --vm-driver=docker 
E0223 13:11:46.553017    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.687133155.exe start -p running-upgrade-312000 --memory=2200 --vm-driver=docker : exit status 70 (1m4.427743315s)

                                                
                                                
-- stdout --
	! [running-upgrade-312000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig2244926791
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 21:12:00.401417974 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-312000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 21:12:19.972417788 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-312000", then "minikube start -p running-upgrade-312000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.29.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.29.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 189.59 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 2.28 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 9.12 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 16.91 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 21.19 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 26.69 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 32.00 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 38.78 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 47.37 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 54.86 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 61.87 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 69.41 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 76.97 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 84.51 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 91.17 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 93.12 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 101.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 109.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 116.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 124.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 133.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 141.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 149.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 154.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 162.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 170.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 174.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 181.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 189.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 194.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 207.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 220.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 227.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 235.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 243.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 251.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 258.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 268.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 276.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 284.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 293.27 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 299.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 307.88 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 312.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 321.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 326.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 334.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 336.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 344.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 353.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 359.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd6
4.tar.lz4: 370.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 380.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 385.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 392.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 403.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 411.25 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 418.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 423.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 429.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 436.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 445.00 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 451.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 463.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-a
md64.tar.lz4: 467.88 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 477.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 484.52 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 492.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 502.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 515.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 522.89 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 529.77 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 539.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 21:12:19.972417788 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
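The failed provisioning step above hinges on a standard systemd rule that the generated unit's own comments quote: a service that is not Type=oneshot may declare only one ExecStart= command, so an override that wants to replace the inherited command must first reset it with an empty ExecStart= line. A minimal sketch of that pattern follows, using an illustrative drop-in path and the stock dockerd command already shown in the diff, not the exact unit minikube writes:

	# /etc/systemd/system/docker.service.d/override.conf   (illustrative path)
	[Service]
	# Reset the ExecStart list inherited from the base docker.service; without this,
	# systemd refuses to start with "Service has more than one ExecStart= setting,
	# which is only allowed for Type=oneshot services."
	ExecStart=
	# Declare the single replacement command (here, the stock dockerd invocation from the diff above).
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Applying such a drop-in requires `sudo systemctl daemon-reload && sudo systemctl restart docker`, which matches the reload/restart sequence the provisioner attempts in the ssh command above.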
version_upgrade_test.go:128: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.687133155.exe start -p running-upgrade-312000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.687133155.exe start -p running-upgrade-312000 --memory=2200 --vm-driver=docker : exit status 70 (4.319360521s)

                                                
                                                
-- stdout --
	* [running-upgrade-312000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1130237614
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-312000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:128: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.687133155.exe start -p running-upgrade-312000 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:128: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.687133155.exe start -p running-upgrade-312000 --memory=2200 --vm-driver=docker : exit status 70 (4.45122931s)

                                                
                                                
-- stdout --
	* [running-upgrade-312000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1877334650
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-312000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:134: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-02-23 13:12:33.639107 -0800 PST m=+2381.626783053
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-312000
helpers_test.go:235: (dbg) docker inspect running-upgrade-312000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9c3267222d8e112cb834983be4badd2807ae100dfda586d1fd159446aa3e397d",
	        "Created": "2023-02-23T21:12:08.586049024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 172460,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T21:12:08.814617871Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/9c3267222d8e112cb834983be4badd2807ae100dfda586d1fd159446aa3e397d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9c3267222d8e112cb834983be4badd2807ae100dfda586d1fd159446aa3e397d/hostname",
	        "HostsPath": "/var/lib/docker/containers/9c3267222d8e112cb834983be4badd2807ae100dfda586d1fd159446aa3e397d/hosts",
	        "LogPath": "/var/lib/docker/containers/9c3267222d8e112cb834983be4badd2807ae100dfda586d1fd159446aa3e397d/9c3267222d8e112cb834983be4badd2807ae100dfda586d1fd159446aa3e397d-json.log",
	        "Name": "/running-upgrade-312000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "running-upgrade-312000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/c31ebc73ebecddfb2c6a918dd610bece26677be13b2d13c57267bea023a3ada9-init/diff:/var/lib/docker/overlay2/62b3a4eb3f919655fd48b775cdd122f8f758dd355101f7ae1f917c82acd0cfd5/diff:/var/lib/docker/overlay2/d76ebdb5ff84afcb7a45bbd90dfd141b8d212bd45d69899937230561fbd23a21/diff:/var/lib/docker/overlay2/10094c0c47905e41e12e160eea6cdb077e0ba5917aac03db3a6da58e38b1a30b/diff:/var/lib/docker/overlay2/9e22cb9759df443caa45b9c262bb33a3b61cae6d29f67c9deb4fee11cad46536/diff:/var/lib/docker/overlay2/82f0d1a16c7c97c68b32acdd08f436fa6c2de555d65ff82863fcb08991471f7c/diff:/var/lib/docker/overlay2/6e62aca1d088bbef7510445a394aee7b869c41e827ae7927a8181330f5809d32/diff:/var/lib/docker/overlay2/55fc9e0f1dd06920593dea87ae2cdd9b9d7e751ea2d3c3ba5360e67721cd955e/diff:/var/lib/docker/overlay2/616d26d496c2a8a0b038b552fb5a9ada5602ee8b665fd890af2aa70f844758a9/diff:/var/lib/docker/overlay2/236cdd6839a81e88d64f65953490d6e48421415e8629455878d87fc8e90fd78b/diff:/var/lib/docker/overlay2/31a751
1998e2b9d3fda5edb408e36f62fc4c3ce83aadaa8e8ba1a1c0ba2ae462/diff:/var/lib/docker/overlay2/eac56b20e5dfc60fdee9758533190e818a798fe53b14d04970a7eba485a16bdb/diff:/var/lib/docker/overlay2/1c3cbc661d482443e57a595a048facfddabd2d1ebfd9b6e6ff2cf37eaba8ea05/diff:/var/lib/docker/overlay2/d99a980e408533c076b9d911968a09807d3b6b758b06031a428aab7c2f57bf98/diff:/var/lib/docker/overlay2/f75caf33199df34c923189d94fc59f364bae60bd2dfe7a59dc0d79e2c1ea0b7e/diff:/var/lib/docker/overlay2/eca12677cfc62357835eb464158b800120ce690e882e082d19d14fab2090b913/diff:/var/lib/docker/overlay2/a60aa76e326bd91f8c34eabdf426156c3d9416d7f9bba356ae6e5e8da5541502/diff:/var/lib/docker/overlay2/46c1d1da25616201dc3adf027761bd60ec98232390ae009d9f666c8b73056bf6/diff:/var/lib/docker/overlay2/09e174d148720337e67c054dc8484fec9719f46234296ec15f197f9bc26d9824/diff:/var/lib/docker/overlay2/2334102829d1acb0e3109529ede7b189866e1426779628291aab9854b62485bb/diff:/var/lib/docker/overlay2/f7b4bd0614824bde8442a434d51b118705d087428e9c8a083c66af4840090838/diff:/var/lib/d
ocker/overlay2/4cffb4ff0f7618836ffbdd99b0084ba4b8575c4e4dd3fa19a5d647d205d8b7f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c31ebc73ebecddfb2c6a918dd610bece26677be13b2d13c57267bea023a3ada9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c31ebc73ebecddfb2c6a918dd610bece26677be13b2d13c57267bea023a3ada9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c31ebc73ebecddfb2c6a918dd610bece26677be13b2d13c57267bea023a3ada9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-312000",
	                "Source": "/var/lib/docker/volumes/running-upgrade-312000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-312000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-312000",
	                "name.minikube.sigs.k8s.io": "running-upgrade-312000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "47521be8daa79b64ee8e30f87ac82ec543a3cac7d1c69bb1fdee66dbaa1c7660",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52399"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52400"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52401"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/47521be8daa7",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "069f58d403350b57c2767c948014311f9e4d0f46bafb2b1f28002365d9349b19",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6da2154054de0f6da1ccf786be7965bbad7953e93dc14b36c61b0f7212051f60",
	                    "EndpointID": "069f58d403350b57c2767c948014311f9e4d0f46bafb2b1f28002365d9349b19",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-312000 -n running-upgrade-312000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-312000 -n running-upgrade-312000: exit status 6 (375.605018ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:12:34.062640   12548 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-312000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-312000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-312000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-312000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-312000: (2.309641228s)
--- FAIL: TestRunningBinaryUpgrade (82.72s)

                                                
                                    
TestKubernetesUpgrade (55.12s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-903000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
version_upgrade_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-903000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 80 (38.489669575s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-903000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-903000 in cluster kubernetes-upgrade-903000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "kubernetes-upgrade-903000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:15:52.805572   13332 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:15:52.805777   13332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:15:52.805782   13332 out.go:309] Setting ErrFile to fd 2...
	I0223 13:15:52.805806   13332 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:15:52.805930   13332 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:15:52.807243   13332 out.go:303] Setting JSON to false
	I0223 13:15:52.826143   13332 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2727,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:15:52.826243   13332 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:15:52.848732   13332 out.go:177] * [kubernetes-upgrade-903000] minikube v1.29.0 on Darwin 13.2
	I0223 13:15:52.891740   13332 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:15:52.891691   13332 notify.go:220] Checking for updates...
	I0223 13:15:52.935428   13332 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:15:52.957640   13332 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:15:52.979584   13332 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:15:53.001725   13332 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:15:53.023613   13332 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:15:53.045281   13332 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:15:53.045450   13332 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:15:53.045525   13332 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:15:53.106622   13332 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:15:53.106769   13332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:15:53.250335   13332 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:15:53.156485764 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:15:53.293839   13332 out.go:177] * Using the docker driver based on user configuration
	I0223 13:15:53.314739   13332 start.go:296] selected driver: docker
	I0223 13:15:53.314764   13332 start.go:857] validating driver "docker" against <nil>
	I0223 13:15:53.314782   13332 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:15:53.318675   13332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:15:53.475946   13332 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:15:53.384452777 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:15:53.476071   13332 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:15:53.476239   13332 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 13:15:53.498054   13332 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:15:53.519423   13332 cni.go:84] Creating CNI manager for ""
	I0223 13:15:53.519466   13332 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 13:15:53.519483   13332 start_flags.go:319] config:
	{Name:kubernetes-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-903000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:15:53.561334   13332 out.go:177] * Starting control plane node kubernetes-upgrade-903000 in cluster kubernetes-upgrade-903000
	I0223 13:15:53.582347   13332 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:15:53.603711   13332 out.go:177] * Pulling base image ...
	I0223 13:15:53.645894   13332 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 13:15:53.645948   13332 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:15:53.645984   13332 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 13:15:53.646010   13332 cache.go:57] Caching tarball of preloaded images
	I0223 13:15:53.646229   13332 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:15:53.646249   13332 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 13:15:53.647289   13332 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/kubernetes-upgrade-903000/config.json ...
	I0223 13:15:53.647463   13332 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/kubernetes-upgrade-903000/config.json: {Name:mk046a994dd7dcec9a8fd780698a77b9d087d0c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:15:53.702740   13332 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:15:53.702758   13332 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:15:53.702791   13332 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:15:53.702831   13332 start.go:364] acquiring machines lock for kubernetes-upgrade-903000: {Name:mk85dbca8c184358da767bff1f95d7fddfd56195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:15:53.702982   13332 start.go:368] acquired machines lock for "kubernetes-upgrade-903000" in 139.651µs
	I0223 13:15:53.703017   13332 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-903000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-903000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:15:53.703071   13332 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:15:53.746565   13332 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:15:53.746968   13332 start.go:159] libmachine.API.Create for "kubernetes-upgrade-903000" (driver="docker")
	I0223 13:15:53.747013   13332 client.go:168] LocalClient.Create starting
	I0223 13:15:53.747220   13332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:15:53.747298   13332 main.go:141] libmachine: Decoding PEM data...
	I0223 13:15:53.747340   13332 main.go:141] libmachine: Parsing certificate...
	I0223 13:15:53.747456   13332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:15:53.747508   13332 main.go:141] libmachine: Decoding PEM data...
	I0223 13:15:53.747537   13332 main.go:141] libmachine: Parsing certificate...
	I0223 13:15:53.748427   13332 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-903000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:15:53.803868   13332 cli_runner.go:211] docker network inspect kubernetes-upgrade-903000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:15:53.803979   13332 network_create.go:281] running [docker network inspect kubernetes-upgrade-903000] to gather additional debugging logs...
	I0223 13:15:53.803997   13332 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-903000
	W0223 13:15:53.858440   13332 cli_runner.go:211] docker network inspect kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:53.858464   13332 network_create.go:284] error running [docker network inspect kubernetes-upgrade-903000]: docker network inspect kubernetes-upgrade-903000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-903000
	I0223 13:15:53.858476   13332 network_create.go:286] output of [docker network inspect kubernetes-upgrade-903000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-903000
	
	** /stderr **
	I0223 13:15:53.858564   13332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:15:53.914320   13332 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:15:53.914651   13332 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000dbc160}
	I0223 13:15:53.914664   13332 network_create.go:123] attempt to create docker network kubernetes-upgrade-903000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:15:53.914728   13332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000
	W0223 13:15:53.968906   13332 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:15:53.968934   13332 network_create.go:148] failed to create docker network kubernetes-upgrade-903000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:15:53.968947   13332 network_create.go:115] failed to create docker network kubernetes-upgrade-903000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:15:53.970273   13332 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:15:53.970598   13332 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e411e0}
	I0223 13:15:53.970608   13332 network_create.go:123] attempt to create docker network kubernetes-upgrade-903000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:15:53.970676   13332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000
	I0223 13:15:54.057019   13332 network_create.go:107] docker network kubernetes-upgrade-903000 192.168.67.0/24 created
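The two network_create attempts above show the subnet-probing behaviour recorded in this log: a candidate /24 is tried, and when 'docker network create' fails with "Pool overlaps with other one on this address space" the subnet is treated as taken and the next candidate is tried (192.168.58.0/24 fails, 192.168.67.0/24 succeeds). Below is a minimal, self-contained sketch of that retry pattern; the 'createNetwork' helper and the candidate list are illustrative assumptions for this report, not minikube's actual implementation.

    // subnet_retry.go - illustrative sketch of the subnet-probing seen above
    // (hypothetical helper, NOT minikube's real code; assumes docker is on PATH).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // candidate /24 subnets, mirroring the 192.168.58/67/76/85 progression in the log
    var candidates = []string{
    	"192.168.58.0/24",
    	"192.168.67.0/24",
    	"192.168.76.0/24",
    	"192.168.85.0/24",
    }

    // createNetwork tries each candidate until `docker network create` succeeds,
    // skipping subnets whose address pool overlaps an existing docker network.
    func createNetwork(name string) (string, error) {
    	for _, subnet := range candidates {
    		gateway := strings.TrimSuffix(subnet, "0/24") + "1" // e.g. 192.168.67.1
    		out, err := exec.Command("docker", "network", "create",
    			"--driver=bridge",
    			"--subnet="+subnet,
    			"--gateway="+gateway,
    			name).CombinedOutput()
    		if err == nil {
    			return subnet, nil
    		}
    		if strings.Contains(string(out), "Pool overlaps") {
    			// subnet already claimed by another network; try the next one
    			continue
    		}
    		return "", fmt.Errorf("docker network create failed: %v: %s", err, out)
    	}
    	return "", fmt.Errorf("no free subnet found for %s", name)
    }

    func main() {
    	subnet, err := createNetwork("example-network")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("created network on", subnet)
    }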
	I0223 13:15:54.057060   13332 kic.go:117] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-903000" container
	I0223 13:15:54.057176   13332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:15:54.113750   13332 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-903000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:15:54.168794   13332 oci.go:103] Successfully created a docker volume kubernetes-upgrade-903000
	I0223 13:15:54.168932   13332 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:15:54.399944   13332 cli_runner.go:211] docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:15:54.399993   13332 client.go:171] LocalClient.Create took 652.97016ms
	I0223 13:15:56.401133   13332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:15:56.401267   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:56.458204   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:56.458327   13332 retry.go:31] will retry after 190.480988ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:56.651254   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:56.711057   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:56.711141   13332 retry.go:31] will retry after 549.335863ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:57.261181   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:57.319444   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:57.319533   13332 retry.go:31] will retry after 688.941195ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:58.010077   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:58.067648   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:15:58.067757   13332 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	W0223 13:15:58.067802   13332 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:58.067860   13332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:15:58.067907   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:58.121512   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:58.121600   13332 retry.go:31] will retry after 279.725287ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:58.403694   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:58.459959   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:58.460054   13332 retry.go:31] will retry after 249.558781ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:58.711990   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:58.768945   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:15:58.769037   13332 retry.go:31] will retry after 685.617513ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:59.454895   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:15:59.509964   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:15:59.510058   13332 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	W0223 13:15:59.510073   13332 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:15:59.510078   13332 start.go:128] duration metric: createHost completed in 5.80699005s
	I0223 13:15:59.510085   13332 start.go:83] releasing machines lock for "kubernetes-upgrade-903000", held for 5.807082233s
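The repeated 'docker container inspect -f ... "22/tcp" ...' calls above are the driver looking up which host port Docker published for the container's SSH port (22/tcp); because the container was never created, each lookup exits 1 with "No such container" and the df checks on /var never run. A small sketch of that lookup, using the same Go template that appears in the log ('sshHostPort' is a hypothetical helper name, not minikube code):

    // ssh_port.go - illustrative lookup of the host port mapped to 22/tcp,
    // reusing the Go template visible in the log (hypothetical helper).
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		// e.g. "No such container" when the container was never created
    		return "", fmt.Errorf("inspect %s: %w", container, err)
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("example")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("ssh is published on host port", port)
    }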
	W0223 13:15:59.510100   13332 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:15:59.510536   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:15:59.564699   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:15:59.564748   13332 delete.go:82] Unable to get host status for kubernetes-upgrade-903000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	W0223 13:15:59.564874   13332 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:15:59.564882   13332 start.go:706] Will try again in 5 seconds ...
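The failure being retried here comes from the "preload-sidecar" probe shown earlier: a throwaway 'docker run' with '--entrypoint /usr/bin/test ... -d /var/lib' that appears to verify the named volume is usable before the real node container is created. It returns exit code 125, meaning 'docker run' itself failed (the daemon cannot reach its containerd socket), so the half-created machine is torn down and retried. A rough sketch of such a probe follows; 'probeVolume', the image name, and the error wording are assumptions for illustration, not minikube's code.

    // volume_probe.go - illustrative sketch of the preload-sidecar style probe
    // seen in the log (hypothetical helper; assumes an image providing /usr/bin/test).
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // probeVolume returns nil when /var/lib exists inside the named volume.
    // Exit code 125 means "docker run" itself failed (daemon-side error, as in
    // the containerd socket refusal above) rather than the test command failing.
    func probeVolume(volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/test",
    		"-v", volume+":/var",
    		image, "-d", "/var/lib")
    	out, err := cmd.CombinedOutput()
    	if err == nil {
    		return nil // /var/lib is present in the volume
    	}
    	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 125 {
    		return fmt.Errorf("docker could not start the probe container: %s", out)
    	}
    	return fmt.Errorf("/var/lib missing in volume %s: %s", volume, out)
    }

    func main() {
    	if err := probeVolume("example-volume", "ubuntu"); err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("volume already prepared")
    }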
	I0223 13:16:04.565664   13332 start.go:364] acquiring machines lock for kubernetes-upgrade-903000: {Name:mk85dbca8c184358da767bff1f95d7fddfd56195 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:16:04.565838   13332 start.go:368] acquired machines lock for "kubernetes-upgrade-903000" in 136.431µs
	I0223 13:16:04.565888   13332 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:16:04.565904   13332 fix.go:55] fixHost starting: 
	I0223 13:16:04.566384   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:04.622336   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:04.622382   13332 fix.go:103] recreateIfNeeded on kubernetes-upgrade-903000: state= err=unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:04.622398   13332 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:16:04.644177   13332 out.go:177] * docker "kubernetes-upgrade-903000" container is missing, will recreate.
	I0223 13:16:04.664705   13332 delete.go:124] DEMOLISHING kubernetes-upgrade-903000 ...
	I0223 13:16:04.664879   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:04.719119   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	W0223 13:16:04.719168   13332 stop.go:75] unable to get state: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:04.719183   13332 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:04.719557   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:04.772806   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:04.772857   13332 delete.go:82] Unable to get host status for kubernetes-upgrade-903000, assuming it has already been deleted: state: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:04.772943   13332 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-903000
	W0223 13:16:04.826839   13332 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:04.826873   13332 kic.go:367] could not find the container kubernetes-upgrade-903000 to remove it. will try anyways
	I0223 13:16:04.826957   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:04.881081   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	W0223 13:16:04.881130   13332 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:04.881215   13332 cli_runner.go:164] Run: docker exec --privileged -t kubernetes-upgrade-903000 /bin/bash -c "sudo init 0"
	W0223 13:16:04.937303   13332 cli_runner.go:211] docker exec --privileged -t kubernetes-upgrade-903000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:16:04.937334   13332 oci.go:641] error shutdown kubernetes-upgrade-903000: docker exec --privileged -t kubernetes-upgrade-903000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:05.937483   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:05.991287   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:05.991334   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:05.991342   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:05.991361   13332 retry.go:31] will retry after 332.687181ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:06.324462   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:06.379688   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:06.379744   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:06.379753   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:06.379772   13332 retry.go:31] will retry after 550.526617ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:06.931015   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:06.986175   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:06.986227   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:06.986234   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:06.986257   13332 retry.go:31] will retry after 1.495751988s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:08.482849   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:08.539558   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:08.539600   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:08.539607   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:08.539627   13332 retry.go:31] will retry after 953.956489ms: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:09.493951   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:09.549540   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:09.549594   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:09.549603   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:09.549624   13332 retry.go:31] will retry after 2.482186857s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:12.032311   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:12.090769   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:12.090817   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:12.090826   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:12.090846   13332 retry.go:31] will retry after 3.908292194s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:15.999357   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:16.055640   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:16.055683   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:16.055694   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:16.055713   13332 retry.go:31] will retry after 5.481934891s: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:21.539157   13332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}
	W0223 13:16:21.598846   13332 cli_runner.go:211] docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}} returned with exit code 1
	I0223 13:16:21.598890   13332 oci.go:653] temporary error verifying shutdown: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:21.598898   13332 oci.go:655] temporary error: container kubernetes-upgrade-903000 status is  but expect it to be exited
	I0223 13:16:21.598930   13332 oci.go:88] couldn't shut down kubernetes-upgrade-903000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	 
	I0223 13:16:21.599008   13332 cli_runner.go:164] Run: docker rm -f -v kubernetes-upgrade-903000
	I0223 13:16:21.657495   13332 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubernetes-upgrade-903000
	W0223 13:16:21.711558   13332 cli_runner.go:211] docker container inspect -f {{.Id}} kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:21.711676   13332 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-903000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:16:21.768185   13332 cli_runner.go:164] Run: docker network rm kubernetes-upgrade-903000
	W0223 13:16:21.882206   13332 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:16:21.882225   13332 fix.go:115] Sleeping 1 second for extra luck!
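The DEMOLISHING block above is the teardown taken before recreating the machine: a best-effort 'sudo init 0' inside the container, 'docker rm -f -v' for the container and its anonymous volumes, then 'docker network rm' for the per-cluster network; the graceful-shutdown step fails harmlessly here because the container never existed. A condensed sketch of that best-effort sequence ('demolish' and 'run' are illustrative helper names, not minikube code):

    // teardown.go - illustrative sketch of the delete-and-recreate teardown seen
    // in the log (hypothetical; errors are tolerated, as the container may not exist).
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a docker command and reports, but does not abort on, failure.
    func run(args ...string) {
    	out, err := exec.Command("docker", args...).CombinedOutput()
    	if err != nil {
    		fmt.Printf("docker %v failed (probably ok): %v: %s\n", args, err, out)
    	}
    }

    func demolish(name string) {
    	// best-effort graceful shutdown inside the container
    	run("exec", "--privileged", "-t", name, "/bin/bash", "-c", "sudo init 0")
    	// force-remove the container together with its anonymous volumes
    	run("rm", "-f", "-v", name)
    	// remove the per-cluster bridge network
    	run("network", "rm", name)
    }

    func main() {
    	demolish("example-cluster")
    }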
	I0223 13:16:22.883227   13332 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:16:22.910381   13332 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:16:22.910565   13332 start.go:159] libmachine.API.Create for "kubernetes-upgrade-903000" (driver="docker")
	I0223 13:16:22.910613   13332 client.go:168] LocalClient.Create starting
	I0223 13:16:22.910776   13332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:16:22.910876   13332 main.go:141] libmachine: Decoding PEM data...
	I0223 13:16:22.910911   13332 main.go:141] libmachine: Parsing certificate...
	I0223 13:16:22.911003   13332 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:16:22.911078   13332 main.go:141] libmachine: Decoding PEM data...
	I0223 13:16:22.911096   13332 main.go:141] libmachine: Parsing certificate...
	I0223 13:16:22.932622   13332 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-903000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:16:22.990712   13332 cli_runner.go:211] docker network inspect kubernetes-upgrade-903000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:16:22.990814   13332 network_create.go:281] running [docker network inspect kubernetes-upgrade-903000] to gather additional debugging logs...
	I0223 13:16:22.990833   13332 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-903000
	W0223 13:16:23.045052   13332 cli_runner.go:211] docker network inspect kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:23.045092   13332 network_create.go:284] error running [docker network inspect kubernetes-upgrade-903000]: docker network inspect kubernetes-upgrade-903000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-903000
	I0223 13:16:23.045104   13332 network_create.go:286] output of [docker network inspect kubernetes-upgrade-903000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-903000
	
	** /stderr **
	I0223 13:16:23.045184   13332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:16:23.100735   13332 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:16:23.102257   13332 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:16:23.103771   13332 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:16:23.104034   13332 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014b03f0}
	I0223 13:16:23.104044   13332 network_create.go:123] attempt to create docker network kubernetes-upgrade-903000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:16:23.104112   13332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000
	W0223 13:16:23.158806   13332 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:16:23.158848   13332 network_create.go:148] failed to create docker network kubernetes-upgrade-903000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:16:23.158862   13332 network_create.go:115] failed to create docker network kubernetes-upgrade-903000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:16:23.160247   13332 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:16:23.160629   13332 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0015a81b0}
	I0223 13:16:23.160645   13332 network_create.go:123] attempt to create docker network kubernetes-upgrade-903000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:16:23.160711   13332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 kubernetes-upgrade-903000
	I0223 13:16:23.247678   13332 network_create.go:107] docker network kubernetes-upgrade-903000 192.168.85.0/24 created
	I0223 13:16:23.247708   13332 kic.go:117] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-903000" container
	I0223 13:16:23.247826   13332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:16:23.306055   13332 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-903000 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:16:23.359958   13332 oci.go:103] Successfully created a docker volume kubernetes-upgrade-903000
	I0223 13:16:23.360093   13332 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:16:23.498745   13332 cli_runner.go:211] docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:16:23.498793   13332 client.go:171] LocalClient.Create took 588.161927ms
	I0223 13:16:25.499207   13332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:16:25.499312   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:25.555341   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:25.555437   13332 retry.go:31] will retry after 137.1678ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:25.695012   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:25.754798   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:25.754885   13332 retry.go:31] will retry after 259.966657ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:26.016043   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:26.071947   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:26.072043   13332 retry.go:31] will retry after 564.725598ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:26.638127   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:26.696124   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:16:26.696222   13332 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	W0223 13:16:26.696237   13332 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:26.696294   13332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:16:26.696350   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:26.752357   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:26.752454   13332 retry.go:31] will retry after 324.521328ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:27.077192   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:27.131369   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:27.131457   13332 retry.go:31] will retry after 229.439709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:27.361730   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:27.419511   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:27.419611   13332 retry.go:31] will retry after 589.711456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:28.011234   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:28.069720   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:16:28.069839   13332 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	W0223 13:16:28.069859   13332 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:28.069863   13332 start.go:128] duration metric: createHost completed in 5.186506609s
	I0223 13:16:28.069937   13332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:16:28.069998   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:28.125517   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:28.125614   13332 retry.go:31] will retry after 209.164216ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:28.336059   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:28.394534   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:28.394616   13332 retry.go:31] will retry after 376.731491ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:28.773674   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:28.832889   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:28.832980   13332 retry.go:31] will retry after 532.22999ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:29.366705   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:29.422789   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:16:29.422886   13332 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	W0223 13:16:29.422903   13332 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:29.422963   13332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:16:29.423012   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:29.478564   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:29.478659   13332 retry.go:31] will retry after 366.793342ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:29.846698   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:29.906108   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:29.906199   13332 retry.go:31] will retry after 478.021113ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:30.385742   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:30.445772   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	I0223 13:16:30.445865   13332 retry.go:31] will retry after 571.902792ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:31.018633   13332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000
	W0223 13:16:31.075746   13332 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000 returned with exit code 1
	W0223 13:16:31.075843   13332 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	W0223 13:16:31.075860   13332 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubernetes-upgrade-903000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-903000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	I0223 13:16:31.075873   13332 fix.go:57] fixHost completed within 26.509907913s
	I0223 13:16:31.075879   13332 start.go:83] releasing machines lock for "kubernetes-upgrade-903000", held for 26.509966671s
	W0223 13:16:31.076009   13332 out.go:239] * Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-903000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p kubernetes-upgrade-903000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:16:31.120544   13332 out.go:177] 
	W0223 13:16:31.141531   13332 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubernetes-upgrade-903000 container: docker run --rm --name kubernetes-upgrade-903000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-903000 --entrypoint /usr/bin/test -v kubernetes-upgrade-903000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:16:31.141562   13332 out.go:239] * 
	* 
	W0223 13:16:31.143047   13332 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:16:31.226187   13332 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:232: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-903000 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 80
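The long retry loop above is minikube's cli_runner repeatedly asking Docker for the host port mapped to the node container's 22/tcp and failing because the container was never created. For reference, a minimal Go sketch of that lookup, using the same inspect template that appears in the log (minus the outer quotes cli_runner adds); the helper name is hypothetical and this is illustrative, not minikube's actual cli_runner code:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor22 asks the docker CLI for the host port bound to the
    // container's 22/tcp, using the same Go template seen in the log above.
    // When the container does not exist, docker exits non-zero and the
    // combined output carries the "No such container" message.
    func hostPortFor22(container string) (string, error) {
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).CombinedOutput()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(string(out)))
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortFor22("kubernetes-upgrade-903000")
        if err != nil {
            fmt.Println("lookup failed:", err) // e.g. "Error: No such container: ..."
            return
        }
        fmt.Println("ssh host port:", port)
    }

Against a missing container this returns the same "Error: No such container" text seen above, which is what retry.go keeps backing off on until fixHost gives up.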
version_upgrade_test.go:235: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-903000
E0223 13:16:46.555031    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
version_upgrade_test.go:235: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-903000: exit status 82 (15.59861715s)

                                                
                                                
-- stdout --
	* Stopping node "kubernetes-upgrade-903000"  ...
	* Stopping node "kubernetes-upgrade-903000"  ...
	* Stopping node "kubernetes-upgrade-903000"  ...
	* Stopping node "kubernetes-upgrade-903000"  ...
	* Stopping node "kubernetes-upgrade-903000"  ...
	* Stopping node "kubernetes-upgrade-903000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect kubernetes-upgrade-903000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
version_upgrade_test.go:237: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-903000 failed: exit status 82
panic.go:522: *** TestKubernetesUpgrade FAILED at 2023-02-23 13:16:46.862557 -0800 PST m=+2634.849638109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-903000
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-903000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "kubernetes-upgrade-903000",
	        "Id": "235c87f96cbe4d9ab63772de11af29ed1bfdbfefb41d0438d6cfd2264859ab55",
	        "Created": "2023-02-23T21:16:23.21170061Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "kubernetes-upgrade-903000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-903000 -n kubernetes-upgrade-903000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-903000 -n kubernetes-upgrade-903000: exit status 7 (158.778946ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:16:47.082016   13648 status.go:249] status error: host: state: unknown state "kubernetes-upgrade-903000": docker container inspect kubernetes-upgrade-903000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubernetes-upgrade-903000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-903000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-903000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-903000
--- FAIL: TestKubernetesUpgrade (55.12s)
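Each failed port lookup in the log above is followed by a retry.go line announcing a short randomized delay before the next attempt. A rough sketch of that retry-with-backoff pattern (the helper and the delay values below are made up for illustration, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry calls fn up to attempts times, sleeping a short randomized
    // interval between failures, similar to the "will retry after ...ms"
    // messages in the log above. The delay range is illustrative.
    func retry(attempts int, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            delay := time.Duration(200+rand.Intn(400)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        err := retry(5, func() error {
            return errors.New("Error: No such container: kubernetes-upgrade-903000")
        })
        fmt.Println("gave up:", err)
    }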

                                                
                                    
TestMissingContainerUpgrade (201.94s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:317: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.98428552.exe start -p missing-upgrade-640000 --memory=2200 --driver=docker 
E0223 13:12:54.342464    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.348796    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.360831    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.381819    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.422491    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.503865    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.664351    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:54.984454    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:55.624646    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:56.904783    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:12:59.465332    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:13:04.585496    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:13:14.825703    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.98428552.exe start -p missing-upgrade-640000 --memory=2200 --driver=docker : exit status 78 (47.249061929s)

                                                
                                                
-- stdout --
	* [missing-upgrade-640000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-640000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-640000" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress meter condensed)
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 21:13:06.142240920 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-640000" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2023-02-23 21:13:26.149240729 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

                                                
                                                
** /stderr **
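The stderr above fails while rewriting /lib/systemd/system/docker.service: the generated unit's own comments explain that an empty ExecStart= must first clear the inherited command before a new one is set, because systemd only allows multiple ExecStart= lines for Type=oneshot services. A minimal sketch of that clear-then-set mechanism, written here as a drop-in override from Go (the path, file name, and dockerd flags are illustrative, not the exact file minikube writes; needs root):

    package main

    import (
        "fmt"
        "os"
    )

    // writeDockerOverride writes a systemd drop-in that clears the inherited
    // ExecStart= before setting a new one, the same trick the diff in the log
    // relies on. Everything below is an illustrative example, not minikube code.
    func writeDockerOverride() error {
        const dir = "/etc/systemd/system/docker.service.d"
        const unit = `[Service]
    # An empty ExecStart= clears the command inherited from the base unit;
    # without it systemd rejects the unit with "more than one ExecStart= setting".
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    `
        if err := os.MkdirAll(dir, 0o755); err != nil {
            return err
        }
        return os.WriteFile(dir+"/10-override.conf", []byte(unit), 0o644)
    }

    func main() {
        if err := writeDockerOverride(); err != nil {
            fmt.Println("write override:", err)
            return
        }
        fmt.Println("wrote override; run `systemctl daemon-reload` and restart docker")
    }

In the failed run above it is the subsequent daemon-reload and docker restart that error out, which is what surfaces as DOCKER_RESTART_FAILED.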
version_upgrade_test.go:317: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.98428552.exe start -p missing-upgrade-640000 --memory=2200 --driver=docker 
E0223 13:13:35.307944    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.98428552.exe start -p missing-upgrade-640000 --memory=2200 --driver=docker : exit status 70 (19.312488067s)

                                                
                                                
-- stdout --
	* [missing-upgrade-640000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-640000
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Updating the running docker "missing-upgrade-640000" container ...
	* Updating the running docker "missing-upgrade-640000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB (download progress meter condensed)
	! StartHost failed, but will try again: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-640000" may fix it.: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:317: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.98428552.exe start -p missing-upgrade-640000 --memory=2200 --driver=docker 
version_upgrade_test.go:317: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.98428552.exe start -p missing-upgrade-640000 --memory=2200 --driver=docker : exit status 70 (8.221187872s)

                                                
                                                
-- stdout --
	* [missing-upgrade-640000] minikube v1.9.1 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-640000
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-640000" container ...
	* Updating the running docker "missing-upgrade-640000" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	X Failed to start docker container. "minikube start -p missing-upgrade-640000" may fix it.: post-start: sudo mkdir (docker): sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: exit status 126
	stdout:
	connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable
	
	stderr:
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
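Both of the final start attempts above bottom out in the same symptom: Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refusing connections, so every command the provisioner runs against the container exits 126 with the "transport: Error while dialing" message. A small diagnostic sketch for checking whether a unix socket accepts connections (the socket path is taken from the log; the probe helper is hypothetical):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeUnixSocket reports whether a unix-domain socket accepts a connection
    // within the timeout. A "connect: connection refused" here corresponds to
    // the dialing errors quoted in the stderr above.
    func probeUnixSocket(path string, timeout time.Duration) error {
        conn, err := net.DialTimeout("unix", path, timeout)
        if err != nil {
            return err
        }
        return conn.Close()
    }

    func main() {
        const sock = "/var/run/desktop-containerd/containerd.sock"
        if err := probeUnixSocket(sock, 2*time.Second); err != nil {
            fmt.Println("socket not reachable:", err)
            return
        }
        fmt.Println("socket accepted a connection")
    }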
version_upgrade_test.go:323: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2023-02-23 13:13:57.878057 -0800 PST m=+2465.865535490
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-640000
helpers_test.go:235: (dbg) docker inspect missing-upgrade-640000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5",
	        "Created": "2023-02-23T21:13:14.312540865Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 174833,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2023-02-23T21:13:14.536376637Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5/hosts",
	        "LogPath": "/var/lib/docker/containers/6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5/6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5-json.log",
	        "Name": "/missing-upgrade-640000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-640000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/8a5e6ab29a789a27bcc57cd1d7fbbdd9d57a96d0740cd7e2f2a95bd9fa543c2b-init/diff:/var/lib/docker/overlay2/62b3a4eb3f919655fd48b775cdd122f8f758dd355101f7ae1f917c82acd0cfd5/diff:/var/lib/docker/overlay2/d76ebdb5ff84afcb7a45bbd90dfd141b8d212bd45d69899937230561fbd23a21/diff:/var/lib/docker/overlay2/10094c0c47905e41e12e160eea6cdb077e0ba5917aac03db3a6da58e38b1a30b/diff:/var/lib/docker/overlay2/9e22cb9759df443caa45b9c262bb33a3b61cae6d29f67c9deb4fee11cad46536/diff:/var/lib/docker/overlay2/82f0d1a16c7c97c68b32acdd08f436fa6c2de555d65ff82863fcb08991471f7c/diff:/var/lib/docker/overlay2/6e62aca1d088bbef7510445a394aee7b869c41e827ae7927a8181330f5809d32/diff:/var/lib/docker/overlay2/55fc9e0f1dd06920593dea87ae2cdd9b9d7e751ea2d3c3ba5360e67721cd955e/diff:/var/lib/docker/overlay2/616d26d496c2a8a0b038b552fb5a9ada5602ee8b665fd890af2aa70f844758a9/diff:/var/lib/docker/overlay2/236cdd6839a81e88d64f65953490d6e48421415e8629455878d87fc8e90fd78b/diff:/var/lib/docker/overlay2/31a751
1998e2b9d3fda5edb408e36f62fc4c3ce83aadaa8e8ba1a1c0ba2ae462/diff:/var/lib/docker/overlay2/eac56b20e5dfc60fdee9758533190e818a798fe53b14d04970a7eba485a16bdb/diff:/var/lib/docker/overlay2/1c3cbc661d482443e57a595a048facfddabd2d1ebfd9b6e6ff2cf37eaba8ea05/diff:/var/lib/docker/overlay2/d99a980e408533c076b9d911968a09807d3b6b758b06031a428aab7c2f57bf98/diff:/var/lib/docker/overlay2/f75caf33199df34c923189d94fc59f364bae60bd2dfe7a59dc0d79e2c1ea0b7e/diff:/var/lib/docker/overlay2/eca12677cfc62357835eb464158b800120ce690e882e082d19d14fab2090b913/diff:/var/lib/docker/overlay2/a60aa76e326bd91f8c34eabdf426156c3d9416d7f9bba356ae6e5e8da5541502/diff:/var/lib/docker/overlay2/46c1d1da25616201dc3adf027761bd60ec98232390ae009d9f666c8b73056bf6/diff:/var/lib/docker/overlay2/09e174d148720337e67c054dc8484fec9719f46234296ec15f197f9bc26d9824/diff:/var/lib/docker/overlay2/2334102829d1acb0e3109529ede7b189866e1426779628291aab9854b62485bb/diff:/var/lib/docker/overlay2/f7b4bd0614824bde8442a434d51b118705d087428e9c8a083c66af4840090838/diff:/var/lib/d
ocker/overlay2/4cffb4ff0f7618836ffbdd99b0084ba4b8575c4e4dd3fa19a5d647d205d8b7f9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a5e6ab29a789a27bcc57cd1d7fbbdd9d57a96d0740cd7e2f2a95bd9fa543c2b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a5e6ab29a789a27bcc57cd1d7fbbdd9d57a96d0740cd7e2f2a95bd9fa543c2b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a5e6ab29a789a27bcc57cd1d7fbbdd9d57a96d0740cd7e2f2a95bd9fa543c2b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-640000",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-640000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-640000",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-640000",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-640000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c327564dbfd6090c5c9a4fb0a1fc3484756d47cf085561c05b68f73d0ee7281c",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52465"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52466"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "52467"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/c327564dbfd6",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "9c2644f881fca35722b98c6877097872cdb5bf7aea93a6c5d13fecd004ffddf5",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "6da2154054de0f6da1ccf786be7965bbad7953e93dc14b36c61b0f7212051f60",
	                    "EndpointID": "9c2644f881fca35722b98c6877097872cdb5bf7aea93a6c5d13fecd004ffddf5",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-640000 -n missing-upgrade-640000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-640000 -n missing-upgrade-640000: exit status 6 (378.115515ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:13:58.304015   13009 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-640000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-640000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-640000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-640000
E0223 13:14:12.933209    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 13:14:16.269284    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
E0223 13:15:38.191833    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-darwin-amd64 delete -p missing-upgrade-640000: signal: killed (2m0.004022523s)

                                                
                                                
-- stdout --
	* Deleting "missing-upgrade-640000" in docker ...
	* Deleting container "missing-upgrade-640000" ...
	* Stopping node "missing-upgrade-640000"  ...
	* Powering off "missing-upgrade-640000" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:14:11.816943   13019 delete.go:56] error deleting container "missing-upgrade-640000". You may want to delete it manually :
	delete missing-upgrade-640000: docker rm -f -v missing-upgrade-640000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Could not kill running container 6e9c741a269d4945e65b88c55cf48ea1ffd5faa21099e293c3a66ef7819930b5, cannot remove - tried to kill container, but did not receive an exit event

                                                
                                                
** /stderr **
helpers_test.go:180: failed cleanup: signal: killed
--- FAIL: TestMissingContainerUpgrade (201.94s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (1021.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.249915275.exe start -p stopped-upgrade-942000 --memory=2200 --vm-driver=docker 
E0223 13:16:09.885200    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.249915275.exe start -p stopped-upgrade-942000 --memory=2200 --vm-driver=docker : exit status 70 (3m34.500385893s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-942000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig3592301617
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 74c91d21f53225bf2b5625106d73bffb03258cd6432dd67788c47d4f45b7776b
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "stopped-upgrade-942000" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: cc87eb56924bc3ad68ce2652b868c9bc0819373f6882e72d593aabeab6f90f20
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p stopped-upgrade-942000", then "minikube start -p stopped-upgrade-942000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 183.01 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 1.89 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 11.58 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 19.25 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 32.06 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 36.30 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 44.51 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 56.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 68.06 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 81.17 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 92.16 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 105.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 120.33 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 130.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 145.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 155.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 164.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 174.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 185.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 199.92 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 208.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 216.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 224.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 232.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 240.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 254.42 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 268.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 277.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 285.75 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 292.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 298.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 306.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 314.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 327.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 340.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 352.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 364.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 372.28 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 380.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 388.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 402.61 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 416.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 423.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 437.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 446.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 458.70 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 469.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 483.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 496.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 508.08 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 516.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd6
4.tar.lz4: 529.12 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 540.09 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: cc87eb56924bc3ad68ce2652b868c9bc0819373f6882e72d593aabeab6f90f20
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.249915275.exe start -p stopped-upgrade-942000 --memory=2200 --vm-driver=docker 
E0223 13:19:49.611504    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.249915275.exe start -p stopped-upgrade-942000 --memory=2200 --vm-driver=docker : exit status 70 (6m42.006103656s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-942000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig249909168
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "stopped-upgrade-942000" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: e53c25e1aabdeae22bbb31e1f7d97a6a102b9a223b7abade9e493a7200ad03b6
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "stopped-upgrade-942000" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: e33cf12da379723951f5d1b164a82b5b12fa7d7880f8bb4fc2bf34498e551acf
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p stopped-upgrade-942000", then "minikube start -p stopped-upgrade-942000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 192.29 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 2.31 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 15.05 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 29.50 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 44.19 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 54.87 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 69.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 84.08 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 98.80 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 108.47 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 122.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 137.31 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 152.01 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 166.37 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 179.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 190.06 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 204.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 218.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 233.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 248.44 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 262.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 277.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 291.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 302.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 317.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 332.11 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 346.05 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 360.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 375.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 389.78 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 404.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 415.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 429.73 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 444.39 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 458.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 473.64 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 484.34 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 498.83 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 513.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 525.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 536.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: e33cf12da379723951f5d1b164a82b5b12fa7d7880f8bb4fc2bf34498e551acf
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:191: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.249915275.exe start -p stopped-upgrade-942000 --memory=2200 --vm-driver=docker 
E0223 13:26:46.557338    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.249915275.exe start -p stopped-upgrade-942000 --memory=2200 --vm-driver=docker : exit status 70 (6m43.065553692s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-942000] minikube v1.9.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1753743458
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* docker "stopped-upgrade-942000" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 2054ef29f86a8b5fef1ce0320deb188ce585a03beb9d4ee95e32a5556513fdcd
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* docker "stopped-upgrade-942000" container is missing, will recreate.
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 93c2a05ac0a9d8d7f037dd80db11f2c64ab9087a8aaf2b2c364ac640d6582a99
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	  - Run: "minikube delete -p stopped-upgrade-942000", then "minikube start -p stopped-upgrade-942000 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 192.29 KiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 2.08 MiB /    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 11.61 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 19.86 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 24.69 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 30.91 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 37.83 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 46.55 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 56.75 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 67.51 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 80.36 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 90.05 MiB     > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4
: 102.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 116.55 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 129.59 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 140.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 154.86 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 167.23 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 181.56 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 194.22 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 205.81 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 215.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 230.30 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 244.67 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 258.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.
lz4: 271.87 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 283.91 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 298.66 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 308.94 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 316.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 327.62 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 338.17 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 352.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 366.95 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 381.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 395.76 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 407.45 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 421.53 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.t
ar.lz4: 435.84 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 450.20 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 455.58 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 466.50 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 480.80 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 493.41 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 502.01 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 515.36 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 526.14 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 540.48 MiB    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: recreate: creating host: create: creating: create kic node: create container: failed args: [run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname stopped-upgrade-942000 --name stopped-upgrade-942000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=stopped-upgrade-942000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=stopped-upgrade-942000 --volume stopped-upgrade-942000:/var --cpus=2 --memory=2200mb --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81] output: 93c2a05ac0a9d8d7f037dd80db11f2c64ab9087a8aaf2b2c364ac640d6582a99
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	: exit status 125
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:197: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (1021.96s)

                                                
                                    
TestPause/serial/Start (38.29s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-720000 --memory=2048 --install-addons=false --wait=all --driver=docker 
pause_test.go:80: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p pause-720000 --memory=2048 --install-addons=false --wait=all --driver=docker : exit status 80 (38.128980931s)

                                                
                                                
-- stdout --
	* [pause-720000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node pause-720000 in cluster pause-720000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* docker "pause-720000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for pause-720000 container: docker run --rm --name pause-720000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-720000 --entrypoint /usr/bin/test -v pause-720000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p pause-720000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for pause-720000 container: docker run --rm --name pause-720000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-720000 --entrypoint /usr/bin/test -v pause-720000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for pause-720000 container: docker run --rm --name pause-720000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=pause-720000 --entrypoint /usr/bin/test -v pause-720000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
pause_test.go:82: failed to start minikube with args: "out/minikube-darwin-amd64 start -p pause-720000 --memory=2048 --install-addons=false --wait=all --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestPause/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect pause-720000
helpers_test.go:235: (dbg) docker inspect pause-720000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "pause-720000",
	        "Id": "b2db2e5fff684ca0e93069a230f641cfef713b28bba86ed05eec4fc12355a111",
	        "Created": "2023-02-23T21:17:17.399264552Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "pause-720000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p pause-720000 -n pause-720000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p pause-720000 -n pause-720000: exit status 7 (99.534924ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:17:26.170092   13881 status.go:249] status error: host: state: unknown state "pause-720000": docker container inspect pause-720000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: pause-720000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-720000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestPause/serial/Start (38.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (36.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-413000 --driver=docker 
E0223 13:17:54.341384    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-413000 --driver=docker : exit status 80 (36.740370596s)

                                                
                                                
-- stdout --
	* [NoKubernetes-413000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node NoKubernetes-413000 in cluster NoKubernetes-413000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-413000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-413000 --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-413000
helpers_test.go:235: (dbg) docker inspect NoKubernetes-413000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-413000",
	        "Id": "1f9f57d10d4536977c48e401affd9a1b5d91c77dcce354de798610787efeb0ca",
	        "Created": "2023-02-23T21:17:55.739786808Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-413000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000: exit status 7 (100.742872ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:18:04.260460   14115 status.go:249] status error: host: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-413000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (36.90s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (62.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --driver=docker 
E0223 13:18:22.033818    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --driver=docker : exit status 80 (1m2.255050621s)

                                                
                                                
-- stdout --
	* [NoKubernetes-413000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-413000
	* Pulling base image ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-413000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:114: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartWithStopK8s]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-413000
helpers_test.go:235: (dbg) docker inspect NoKubernetes-413000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-413000",
	        "Id": "6abe41a37ff8b81412d0dc5cb1ecba7462c45b7862ff73d2c8d235b7911da33d",
	        "Created": "2023-02-23T21:18:57.741138634Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-413000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000: exit status 7 (101.360068ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:19:06.675616   14423 status.go:249] status error: host: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-413000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithStopK8s (62.42s)

                                                
                                    
TestNoKubernetes/serial/Start (63.81s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --driver=docker 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --driver=docker : exit status 80 (1m3.654245853s)
-- stdout --
	* [NoKubernetes-413000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-413000
	* Pulling base image ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-413000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Start]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-413000
helpers_test.go:235: (dbg) docker inspect NoKubernetes-413000:
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-413000",
	        "Id": "f87c286d3dbe5b03727e9847c16ef42ff57f20553013f1941084d0444d7d4d3e",
	        "Created": "2023-02-23T21:20:01.55979983Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-413000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000: exit status 7 (100.300213ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0223 13:20:10.490106   14798 status.go:249] status error: host: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-413000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Start (63.81s)
TestNoKubernetes/serial/Stop (13.85s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-413000
no_kubernetes_test.go:158: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p NoKubernetes-413000: exit status 82 (13.690092853s)
-- stdout --
	* Stopping node "NoKubernetes-413000"  ...
	* Stopping node "NoKubernetes-413000"  ...
	* Stopping node "NoKubernetes-413000"  ...
	* Stopping node "NoKubernetes-413000"  ...
	* Stopping node "NoKubernetes-413000"  ...
	* Stopping node "NoKubernetes-413000"  ...
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect NoKubernetes-413000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:160: Failed to stop minikube "out/minikube-darwin-amd64 stop -p NoKubernetes-413000" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-413000
helpers_test.go:235: (dbg) docker inspect NoKubernetes-413000:
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-413000",
	        "Id": "f87c286d3dbe5b03727e9847c16ef42ff57f20553013f1941084d0444d7d4d3e",
	        "Created": "2023-02-23T21:20:01.55979983Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-413000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000: exit status 7 (100.445321ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0223 13:20:35.261232   14914 status.go:249] status error: host: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-413000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/Stop (13.85s)
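Note: this stop attempt loops on "Stopping node" and exits 82 (GUEST_STOP_TIMEOUT) because the container it is asked to stop was never successfully created in the preceding step. A minimal sketch (assumption: same docker CLI invocation the log shows minikube using; this is not minikube's actual code) of distinguishing "container does not exist" from a genuine inspect failure:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState reports the Docker state of a named container, or
// exists=false when Docker answers "No such container" - the condition
// that made this stop attempt spin until it hit the timeout.
func containerState(name string) (string, bool, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "No such container") {
			return "", false, nil // nothing to stop
		}
		return "", false, fmt.Errorf("inspect %s: %v\n%s", name, err, out)
	}
	return strings.TrimSpace(string(out)), true, nil
}

func main() {
	state, exists, err := containerState("NoKubernetes-413000")
	switch {
	case err != nil:
		fmt.Println("inspect failed:", err)
	case !exists:
		fmt.Println("container does not exist; nothing to stop")
	default:
		fmt.Println("container state:", state)
	}
}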
TestNoKubernetes/serial/StartNoArgs (61.13s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-413000 --driver=docker 
E0223 13:21:09.885870    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-413000 --driver=docker : exit status 80 (1m0.971293635s)
-- stdout --
	* [NoKubernetes-413000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-413000
	* Pulling base image ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	* docker "NoKubernetes-413000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=5895MB) ...
	
	
-- /stdout --
** stderr ** 
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p NoKubernetes-413000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for NoKubernetes-413000 container: docker run --rm --name NoKubernetes-413000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-413000 --entrypoint /usr/bin/test -v NoKubernetes-413000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-413000 --driver=docker " : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestNoKubernetes/serial/StartNoArgs]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect NoKubernetes-413000
helpers_test.go:235: (dbg) docker inspect NoKubernetes-413000:
-- stdout --
	[
	    {
	        "Name": "NoKubernetes-413000",
	        "Id": "329a1c11b2809b8e47660fc8f6a0126df73ae36c9ba7ec8789d626286aaafa00",
	        "Created": "2023-02-23T21:21:27.191191107Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "NoKubernetes-413000"
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-413000 -n NoKubernetes-413000: exit status 7 (101.570254ms)
-- stdout --
	Nonexistent
-- /stdout --
** stderr ** 
	E0223 13:21:36.394182   15220 status.go:249] status error: host: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-413000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (61.13s)
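Note: the recovery path visible throughout this group ("StartHost failed, but will try again", "container is missing, will recreate") is a bounded retry around host creation; it cannot succeed here because every attempt hits the same unreachable containerd socket. A simplified sketch of that retry shape (attempt count, delay, and error text are illustrative, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryStart retries a start function a fixed number of times with a pause
// between attempts, roughly the shape of the retry-and-recreate cycle in
// these logs.
func retryStart(attempts int, delay time.Duration, start func() error) error {
	var err error
	for i := 1; i <= attempts; i++ {
		if err = start(); err == nil {
			return nil
		}
		fmt.Printf("attempt %d failed: %v\n", i, err)
		if i < attempts {
			fmt.Printf("will try again in %s\n", delay)
			time.Sleep(delay)
		}
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}

func main() {
	err := retryStart(2, 5*time.Second, func() error {
		// Stand-in for the provisioning step that kept failing in this run.
		return errors.New("preparing volume: docker run ...: exit status 125")
	})
	fmt.Println(err)
}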
TestNetworkPlugins/group/auto/Start (38.95s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker 
E0223 13:21:46.556692    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p auto-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker : exit status 80 (38.932548328s)
-- stdout --
	* [auto-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node auto-235000 in cluster auto-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "auto-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	
-- /stdout --
** stderr ** 
	I0223 13:21:37.462072   15248 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:21:37.462222   15248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:21:37.462227   15248 out.go:309] Setting ErrFile to fd 2...
	I0223 13:21:37.462231   15248 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:21:37.462336   15248 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:21:37.463683   15248 out.go:303] Setting JSON to false
	I0223 13:21:37.482404   15248 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3072,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:21:37.482487   15248 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:21:37.504609   15248 out.go:177] * [auto-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:21:37.546279   15248 notify.go:220] Checking for updates...
	I0223 13:21:37.546305   15248 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:21:37.567924   15248 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:21:37.589756   15248 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:21:37.632569   15248 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:21:37.674510   15248 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:21:37.695683   15248 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:21:37.718293   15248 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:21:37.718454   15248 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:21:37.718592   15248 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:21:37.718665   15248 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:21:37.781886   15248 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:21:37.782004   15248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:21:37.927274   15248 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:21:37.833036978 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:21:37.949180   15248 out.go:177] * Using the docker driver based on user configuration
	I0223 13:21:37.971080   15248 start.go:296] selected driver: docker
	I0223 13:21:37.971106   15248 start.go:857] validating driver "docker" against <nil>
	I0223 13:21:37.971123   15248 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:21:37.975015   15248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:21:38.115661   15248 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:21:38.025184562 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:21:38.115758   15248 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:21:38.115941   15248 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:21:38.137804   15248 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:21:38.159391   15248 cni.go:84] Creating CNI manager for ""
	I0223 13:21:38.159458   15248 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:21:38.159474   15248 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:21:38.159511   15248 start_flags.go:319] config:
	{Name:auto-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:auto-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:21:38.203608   15248 out.go:177] * Starting control plane node auto-235000 in cluster auto-235000
	I0223 13:21:38.225204   15248 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:21:38.246343   15248 out.go:177] * Pulling base image ...
	I0223 13:21:38.289330   15248 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:21:38.289325   15248 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:21:38.289419   15248 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:21:38.289437   15248 cache.go:57] Caching tarball of preloaded images
	I0223 13:21:38.289682   15248 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:21:38.289702   15248 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:21:38.290771   15248 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/auto-235000/config.json ...
	I0223 13:21:38.290910   15248 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/auto-235000/config.json: {Name:mk563b20f08b7772587446efc801f63b501e6c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:21:38.348315   15248 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:21:38.348334   15248 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:21:38.348352   15248 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:21:38.348416   15248 start.go:364] acquiring machines lock for auto-235000: {Name:mkd2e96e8ac32591a6855eeb6005668251ebd54d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:21:38.348560   15248 start.go:368] acquired machines lock for "auto-235000" in 132.336µs
	I0223 13:21:38.348594   15248 start.go:93] Provisioning new machine with config: &{Name:auto-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:auto-235000 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:21:38.348697   15248 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:21:38.370702   15248 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:21:38.371176   15248 start.go:159] libmachine.API.Create for "auto-235000" (driver="docker")
	I0223 13:21:38.371222   15248 client.go:168] LocalClient.Create starting
	I0223 13:21:38.371441   15248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:21:38.371536   15248 main.go:141] libmachine: Decoding PEM data...
	I0223 13:21:38.371565   15248 main.go:141] libmachine: Parsing certificate...
	I0223 13:21:38.371693   15248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:21:38.371757   15248 main.go:141] libmachine: Decoding PEM data...
	I0223 13:21:38.371781   15248 main.go:141] libmachine: Parsing certificate...
	I0223 13:21:38.372602   15248 cli_runner.go:164] Run: docker network inspect auto-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:21:38.427985   15248 cli_runner.go:211] docker network inspect auto-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:21:38.428089   15248 network_create.go:281] running [docker network inspect auto-235000] to gather additional debugging logs...
	I0223 13:21:38.428105   15248 cli_runner.go:164] Run: docker network inspect auto-235000
	W0223 13:21:38.482127   15248 cli_runner.go:211] docker network inspect auto-235000 returned with exit code 1
	I0223 13:21:38.482158   15248 network_create.go:284] error running [docker network inspect auto-235000]: docker network inspect auto-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-235000
	I0223 13:21:38.482179   15248 network_create.go:286] output of [docker network inspect auto-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-235000
	
	** /stderr **
	I0223 13:21:38.482306   15248 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:21:38.539311   15248 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:21:38.539677   15248 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d7ce00}
	I0223 13:21:38.539691   15248 network_create.go:123] attempt to create docker network auto-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:21:38.539761   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000
	W0223 13:21:38.595386   15248 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000 returned with exit code 1
	W0223 13:21:38.595424   15248 network_create.go:148] failed to create docker network auto-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:21:38.595443   15248 network_create.go:115] failed to create docker network auto-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:21:38.596772   15248 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:21:38.597098   15248 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ec8640}
	I0223 13:21:38.597109   15248 network_create.go:123] attempt to create docker network auto-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:21:38.597179   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000
	I0223 13:21:38.684720   15248 network_create.go:107] docker network auto-235000 192.168.67.0/24 created
	I0223 13:21:38.684764   15248 kic.go:117] calculated static IP "192.168.67.2" for the "auto-235000" container
	I0223 13:21:38.684885   15248 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:21:38.741714   15248 cli_runner.go:164] Run: docker volume create auto-235000 --label name.minikube.sigs.k8s.io=auto-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:21:38.797043   15248 oci.go:103] Successfully created a docker volume auto-235000
	I0223 13:21:38.797173   15248 cli_runner.go:164] Run: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:21:39.010408   15248 cli_runner.go:211] docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:21:39.010452   15248 client.go:171] LocalClient.Create took 639.220093ms
	I0223 13:21:41.012776   15248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:21:41.012918   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:41.070139   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:21:41.070262   15248 retry.go:31] will retry after 227.457662ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:41.299839   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:41.356744   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:21:41.356826   15248 retry.go:31] will retry after 347.36625ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:41.706596   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:41.764504   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:21:41.764591   15248 retry.go:31] will retry after 744.153988ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:42.511092   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:42.568782   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	W0223 13:21:42.568879   15248 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	
	W0223 13:21:42.568894   15248 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:42.568950   15248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:21:42.568995   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:42.623609   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:21:42.623708   15248 retry.go:31] will retry after 173.015601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:42.799036   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:42.858553   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:21:42.858634   15248 retry.go:31] will retry after 510.582587ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:43.370391   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:43.428725   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:21:43.428806   15248 retry.go:31] will retry after 545.553645ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:43.975313   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:21:44.034768   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	W0223 13:21:44.034859   15248 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	
	W0223 13:21:44.034875   15248 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:44.034887   15248 start.go:128] duration metric: createHost completed in 5.68617325s
	I0223 13:21:44.034895   15248 start.go:83] releasing machines lock for "auto-235000", held for 5.686313562s
	W0223 13:21:44.034911   15248 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:21:44.035356   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:44.090377   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:44.090431   15248 delete.go:82] Unable to get host status for auto-235000, assuming it has already been deleted: state: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	W0223 13:21:44.090590   15248 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:21:44.090599   15248 start.go:706] Will try again in 5 seconds ...
	I0223 13:21:49.091591   15248 start.go:364] acquiring machines lock for auto-235000: {Name:mkd2e96e8ac32591a6855eeb6005668251ebd54d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:21:49.091753   15248 start.go:368] acquired machines lock for "auto-235000" in 127.115µs
	I0223 13:21:49.091789   15248 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:21:49.091803   15248 fix.go:55] fixHost starting: 
	I0223 13:21:49.092238   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:49.147647   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:49.147688   15248 fix.go:103] recreateIfNeeded on auto-235000: state= err=unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:49.147705   15248 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:21:49.170379   15248 out.go:177] * docker "auto-235000" container is missing, will recreate.
	I0223 13:21:49.212180   15248 delete.go:124] DEMOLISHING auto-235000 ...
	I0223 13:21:49.212380   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:49.268772   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:21:49.268822   15248 stop.go:75] unable to get state: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:49.268837   15248 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:49.269249   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:49.323817   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:49.323865   15248 delete.go:82] Unable to get host status for auto-235000, assuming it has already been deleted: state: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:49.323953   15248 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-235000
	W0223 13:21:49.378258   15248 cli_runner.go:211] docker container inspect -f {{.Id}} auto-235000 returned with exit code 1
	I0223 13:21:49.378289   15248 kic.go:367] could not find the container auto-235000 to remove it. will try anyways
	I0223 13:21:49.378366   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:49.432100   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:21:49.432144   15248 oci.go:84] error getting container status, will try to delete anyways: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:49.432234   15248 cli_runner.go:164] Run: docker exec --privileged -t auto-235000 /bin/bash -c "sudo init 0"
	W0223 13:21:49.485571   15248 cli_runner.go:211] docker exec --privileged -t auto-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:21:49.485602   15248 oci.go:641] error shutdown auto-235000: docker exec --privileged -t auto-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:50.487371   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:50.543334   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:50.543375   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:50.543384   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:21:50.543402   15248 retry.go:31] will retry after 700.817345ms: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:51.246037   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:51.303128   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:51.303179   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:51.303187   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:21:51.303209   15248 retry.go:31] will retry after 730.109423ms: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:52.035329   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:52.090720   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:52.090766   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:52.090776   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:21:52.090804   15248 retry.go:31] will retry after 1.479712288s: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:53.571506   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:53.628269   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:53.628312   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:53.628329   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:21:53.628348   15248 retry.go:31] will retry after 1.783372612s: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:55.413323   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:55.469118   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:55.469162   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:55.469170   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:21:55.469191   15248 retry.go:31] will retry after 3.721941488s: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:59.193624   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:21:59.251439   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:21:59.251486   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:21:59.251493   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:21:59.251511   15248 retry.go:31] will retry after 2.170943393s: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:01.423422   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:22:01.481144   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:01.481188   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:01.481196   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:22:01.481217   15248 retry.go:31] will retry after 4.273996567s: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:05.757081   15248 cli_runner.go:164] Run: docker container inspect auto-235000 --format={{.State.Status}}
	W0223 13:22:05.812637   15248 cli_runner.go:211] docker container inspect auto-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:05.812680   15248 oci.go:653] temporary error verifying shutdown: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:05.812688   15248 oci.go:655] temporary error: container auto-235000 status is  but expect it to be exited
	I0223 13:22:05.812730   15248 oci.go:88] couldn't shut down auto-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "auto-235000": docker container inspect auto-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	 
	I0223 13:22:05.812820   15248 cli_runner.go:164] Run: docker rm -f -v auto-235000
	I0223 13:22:05.869449   15248 cli_runner.go:164] Run: docker container inspect -f {{.Id}} auto-235000
	W0223 13:22:05.925495   15248 cli_runner.go:211] docker container inspect -f {{.Id}} auto-235000 returned with exit code 1
	I0223 13:22:05.925605   15248 cli_runner.go:164] Run: docker network inspect auto-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:22:05.980740   15248 cli_runner.go:164] Run: docker network rm auto-235000
	W0223 13:22:06.090025   15248 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:22:06.090045   15248 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:22:07.091255   15248 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:22:07.113186   15248 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:22:07.113348   15248 start.go:159] libmachine.API.Create for "auto-235000" (driver="docker")
	I0223 13:22:07.113385   15248 client.go:168] LocalClient.Create starting
	I0223 13:22:07.113590   15248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:22:07.113686   15248 main.go:141] libmachine: Decoding PEM data...
	I0223 13:22:07.113706   15248 main.go:141] libmachine: Parsing certificate...
	I0223 13:22:07.113793   15248 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:22:07.113861   15248 main.go:141] libmachine: Decoding PEM data...
	I0223 13:22:07.113884   15248 main.go:141] libmachine: Parsing certificate...
	I0223 13:22:07.114603   15248 cli_runner.go:164] Run: docker network inspect auto-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:22:07.169751   15248 cli_runner.go:211] docker network inspect auto-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:22:07.169844   15248 network_create.go:281] running [docker network inspect auto-235000] to gather additional debugging logs...
	I0223 13:22:07.169864   15248 cli_runner.go:164] Run: docker network inspect auto-235000
	W0223 13:22:07.224237   15248 cli_runner.go:211] docker network inspect auto-235000 returned with exit code 1
	I0223 13:22:07.224273   15248 network_create.go:284] error running [docker network inspect auto-235000]: docker network inspect auto-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-235000
	I0223 13:22:07.224285   15248 network_create.go:286] output of [docker network inspect auto-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-235000
	
	** /stderr **
	I0223 13:22:07.224373   15248 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:22:07.282179   15248 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:07.283674   15248 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:07.285156   15248 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:07.285443   15248 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011171f0}
	I0223 13:22:07.285456   15248 network_create.go:123] attempt to create docker network auto-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:22:07.285523   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000
	W0223 13:22:07.340491   15248 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000 returned with exit code 1
	W0223 13:22:07.340522   15248 network_create.go:148] failed to create docker network auto-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:22:07.340536   15248 network_create.go:115] failed to create docker network auto-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:22:07.342107   15248 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:07.342441   15248 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011ca0e0}
	I0223 13:22:07.342453   15248 network_create.go:123] attempt to create docker network auto-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:22:07.342521   15248 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-235000 auto-235000
	I0223 13:22:07.429099   15248 network_create.go:107] docker network auto-235000 192.168.85.0/24 created
	I0223 13:22:07.429126   15248 kic.go:117] calculated static IP "192.168.85.2" for the "auto-235000" container
	I0223 13:22:07.429246   15248 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:22:07.495082   15248 cli_runner.go:164] Run: docker volume create auto-235000 --label name.minikube.sigs.k8s.io=auto-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:22:07.549196   15248 oci.go:103] Successfully created a docker volume auto-235000
	I0223 13:22:07.549313   15248 cli_runner.go:164] Run: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:22:07.725652   15248 cli_runner.go:211] docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:22:07.725691   15248 client.go:171] LocalClient.Create took 612.294756ms
	I0223 13:22:09.726576   15248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:22:09.726713   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:09.782796   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:09.782881   15248 retry.go:31] will retry after 365.075211ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:10.149640   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:10.206033   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:10.206133   15248 retry.go:31] will retry after 220.252514ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:10.427501   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:10.482860   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:10.482955   15248 retry.go:31] will retry after 425.12767ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:10.909716   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:10.965880   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	W0223 13:22:10.965970   15248 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	
	W0223 13:22:10.965988   15248 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:10.966051   15248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:22:10.966102   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:11.020598   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:11.020694   15248 retry.go:31] will retry after 295.322474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:11.316558   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:11.373127   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:11.373221   15248 retry.go:31] will retry after 272.588719ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:11.646899   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:11.702786   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:11.702871   15248 retry.go:31] will retry after 521.337058ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:12.225787   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:12.281745   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:12.281831   15248 retry.go:31] will retry after 512.193021ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:12.795808   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:12.851666   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	W0223 13:22:12.851773   15248 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	
	W0223 13:22:12.851790   15248 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:12.851794   15248 start.go:128] duration metric: createHost completed in 5.760499701s
	I0223 13:22:12.851861   15248 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:22:12.851915   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:12.905784   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:12.905880   15248 retry.go:31] will retry after 349.412833ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:13.255617   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:13.311697   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:13.311787   15248 retry.go:31] will retry after 448.146361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:13.761361   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:13.818834   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:13.818916   15248 retry.go:31] will retry after 809.300541ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:14.629222   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:14.684571   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	W0223 13:22:14.684655   15248 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	
	W0223 13:22:14.684673   15248 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:14.684744   15248 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:22:14.684805   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:14.739163   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:14.739243   15248 retry.go:31] will retry after 246.247971ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:14.987528   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:15.043806   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:15.043896   15248 retry.go:31] will retry after 405.270551ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:15.449551   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:15.505146   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	I0223 13:22:15.505229   15248 retry.go:31] will retry after 635.37609ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:16.141485   15248 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000
	W0223 13:22:16.197594   15248 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000 returned with exit code 1
	W0223 13:22:16.197684   15248 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	
	W0223 13:22:16.197704   15248 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "auto-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: auto-235000
	I0223 13:22:16.197709   15248 fix.go:57] fixHost completed within 27.105844463s
	I0223 13:22:16.197717   15248 start.go:83] releasing machines lock for "auto-235000", held for 27.105888417s
	W0223 13:22:16.197937   15248 out.go:239] * Failed to start docker container. Running "minikube delete -p auto-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p auto-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:22:16.241226   15248 out.go:177] 
	W0223 13:22:16.262517   15248 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for auto-235000 container: docker run --rm --name auto-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-235000 --entrypoint /usr/bin/test -v auto-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:22:16.262543   15248 out.go:239] * 
	* 
	W0223 13:22:16.263907   15248 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:22:16.328219   15248 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/auto/Start (38.95s)
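Reading the log above, the failure is not specific to the auto CNI profile: both createHost attempts die at the same step, preparing the node volume, because Docker Desktop's embedded containerd socket (/var/run/desktop-containerd/containerd.sock) refuses connections, so the preload-sidecar `docker run` exits with status 125 and every later `docker container inspect auto-235000` reports "No such container". A minimal sketch for confirming the daemon-side cause on the affected host, assuming Docker Desktop is still the active context and reusing the sidecar command from the log (labels trimmed, image digest copied verbatim rather than re-resolved):

    # If the daemon side is down, these already fail before anything minikube-specific runs.
    docker version --format '{{.Server.Version}}'
    docker info --format '{{.ServerVersion}}'

    # Re-run the exact step minikube attempted; exit status 125 with the
    # "connection refused ... desktop-containerd/containerd.sock" message reproduces the report.
    docker run --rm --name auto-235000-preload-sidecar \
      --entrypoint /usr/bin/test \
      -v auto-235000:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
      -d /var/lib

    # Clear leftovers from the failed run before retrying, as the report itself suggests.
    minikube delete -p auto-235000

If the `docker run` above succeeds once Docker Desktop has recovered, the remaining errors in this test (the ssh port-22 lookups and the network-create retry onto 192.168.85.0/24) are downstream of the same daemon outage rather than independent faults.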

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (39.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker 
E0223 13:22:54.342626    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kindnet-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker : exit status 80 (39.825922192s)

                                                
                                                
-- stdout --
	* [kindnet-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kindnet-235000 in cluster kindnet-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "kindnet-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:22:24.557285   15651 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:22:24.557438   15651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:22:24.557443   15651 out.go:309] Setting ErrFile to fd 2...
	I0223 13:22:24.557447   15651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:22:24.557550   15651 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:22:24.559003   15651 out.go:303] Setting JSON to false
	I0223 13:22:24.577575   15651 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3119,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:22:24.577666   15651 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:22:24.599643   15651 out.go:177] * [kindnet-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:22:24.621025   15651 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:22:24.621006   15651 notify.go:220] Checking for updates...
	I0223 13:22:24.642675   15651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:22:24.664745   15651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:22:24.685857   15651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:22:24.707707   15651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:22:24.728594   15651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:22:24.749958   15651 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:22:24.750068   15651 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:22:24.750139   15651 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:22:24.750171   15651 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:22:24.809726   15651 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:22:24.809838   15651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:22:24.952489   15651 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:22:24.861072356 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:22:24.996122   15651 out.go:177] * Using the docker driver based on user configuration
	I0223 13:22:25.017063   15651 start.go:296] selected driver: docker
	I0223 13:22:25.017103   15651 start.go:857] validating driver "docker" against <nil>
	I0223 13:22:25.017123   15651 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:22:25.020902   15651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:22:25.161698   15651 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:22:25.070683046 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:22:25.161833   15651 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:22:25.162015   15651 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:22:25.183857   15651 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:22:25.205609   15651 cni.go:84] Creating CNI manager for "kindnet"
	I0223 13:22:25.205634   15651 start_flags.go:314] Found "CNI" CNI - setting NetworkPlugin=cni
	I0223 13:22:25.205651   15651 start_flags.go:319] config:
	{Name:kindnet-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:22:25.248403   15651 out.go:177] * Starting control plane node kindnet-235000 in cluster kindnet-235000
	I0223 13:22:25.269737   15651 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:22:25.291602   15651 out.go:177] * Pulling base image ...
	I0223 13:22:25.333487   15651 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:22:25.333543   15651 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:22:25.333577   15651 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:22:25.333593   15651 cache.go:57] Caching tarball of preloaded images
	I0223 13:22:25.333825   15651 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:22:25.333845   15651 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:22:25.334879   15651 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/kindnet-235000/config.json ...
	I0223 13:22:25.335018   15651 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/kindnet-235000/config.json: {Name:mk85562849712ee4d43fb8776847116ed3151df4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:22:25.390535   15651 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:22:25.390552   15651 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:22:25.390660   15651 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:22:25.390698   15651 start.go:364] acquiring machines lock for kindnet-235000: {Name:mk5572fbbbec76974657dfc241ce29b1416d6f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:22:25.390883   15651 start.go:368] acquired machines lock for "kindnet-235000" in 158.459µs
	I0223 13:22:25.390914   15651 start.go:93] Provisioning new machine with config: &{Name:kindnet-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kindnet-235000 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:22:25.390997   15651 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:22:25.434387   15651 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:22:25.434835   15651 start.go:159] libmachine.API.Create for "kindnet-235000" (driver="docker")
	I0223 13:22:25.434878   15651 client.go:168] LocalClient.Create starting
	I0223 13:22:25.435108   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:22:25.435223   15651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:22:25.435302   15651 main.go:141] libmachine: Parsing certificate...
	I0223 13:22:25.435423   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:22:25.435489   15651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:22:25.435506   15651 main.go:141] libmachine: Parsing certificate...
	I0223 13:22:25.436326   15651 cli_runner.go:164] Run: docker network inspect kindnet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:22:25.491617   15651 cli_runner.go:211] docker network inspect kindnet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:22:25.491706   15651 network_create.go:281] running [docker network inspect kindnet-235000] to gather additional debugging logs...
	I0223 13:22:25.491727   15651 cli_runner.go:164] Run: docker network inspect kindnet-235000
	W0223 13:22:25.546222   15651 cli_runner.go:211] docker network inspect kindnet-235000 returned with exit code 1
	I0223 13:22:25.546247   15651 network_create.go:284] error running [docker network inspect kindnet-235000]: docker network inspect kindnet-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-235000
	I0223 13:22:25.546258   15651 network_create.go:286] output of [docker network inspect kindnet-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-235000
	
	** /stderr **
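Editor's note: the `docker network inspect kindnet-235000 --format ...` call above uses a Go template to pull the network's name, driver, subnet, gateway, MTU and attached container IPs in one shot; when the network does not exist it exits 1, and minikube re-runs a plain `docker network inspect` just to capture the error text seen in the stdout/stderr blocks. The sketch below extracts the same subnet/gateway information by parsing the inspect JSON instead of a `--format` template. It is an illustrative sketch only (it inspects the default `bridge` network and assumes the docker CLI is on PATH), not minikube's network_create.go.

```go
// Minimal sketch: read subnet and gateway from `docker network inspect`
// JSON output instead of a --format Go template. Illustrative only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type ipamConfig struct {
	Subnet  string
	Gateway string
}

type network struct {
	Name string
	IPAM struct {
		Config []ipamConfig
	}
}

func main() {
	out, err := exec.Command("docker", "network", "inspect", "bridge").Output()
	if err != nil {
		// A missing network exits 1, exactly as in the log above.
		fmt.Println("inspect failed (network may not exist):", err)
		return
	}
	var nets []network
	if err := json.Unmarshal(out, &nets); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, n := range nets {
		for _, c := range n.IPAM.Config {
			fmt.Printf("%s: subnet=%s gateway=%s\n", n.Name, c.Subnet, c.Gateway)
		}
	}
}
```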
	I0223 13:22:25.546349   15651 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:22:25.603329   15651 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:25.603657   15651 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00005b5e0}
	I0223 13:22:25.603671   15651 network_create.go:123] attempt to create docker network kindnet-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:22:25.603746   15651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000
	W0223 13:22:25.659939   15651 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000 returned with exit code 1
	W0223 13:22:25.659982   15651 network_create.go:148] failed to create docker network kindnet-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:22:25.660002   15651 network_create.go:115] failed to create docker network kindnet-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:22:25.661386   15651 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:25.661704   15651 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000714840}
	I0223 13:22:25.661721   15651 network_create.go:123] attempt to create docker network kindnet-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:22:25.661786   15651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000
	I0223 13:22:25.748696   15651 network_create.go:107] docker network kindnet-235000 192.168.67.0/24 created
	I0223 13:22:25.748739   15651 kic.go:117] calculated static IP "192.168.67.2" for the "kindnet-235000" container
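Editor's note: the lines above show the subnet-probing behaviour: candidate 192.168.x.0/24 networks are tried in steps of 9 (49, 58, 67, and later 76 and 85), reserved or already-used subnets are skipped, and a `docker network create` that fails with "Pool overlaps with other one on this address space" is treated as "subnet is taken" and retried with the next candidate. The following is a minimal sketch of that loop under those observed assumptions; it is not minikube's actual implementation, and the profile name is taken from this log.

```go
// Minimal sketch (not minikube's network.go): probe candidate
// 192.168.x.0/24 subnets in steps of 9 and retry on "Pool overlaps",
// mirroring the 49 -> 58 -> 67 progression visible in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createNetwork(name string, thirdOctet int) error {
	subnet := fmt.Sprintf("192.168.%d.0/24", thirdOctet)
	gateway := fmt.Sprintf("192.168.%d.1", thirdOctet)
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
	if err != nil && strings.Contains(string(out), "Pool overlaps") {
		return fmt.Errorf("subnet %s is taken", subnet)
	}
	return err
}

func main() {
	// Walk the same candidates the log shows: 49, 58, 67, 76, 85.
	for octet := 49; octet <= 85; octet += 9 {
		if err := createNetwork("kindnet-235000", octet); err != nil {
			fmt.Println("retrying:", err)
			continue
		}
		fmt.Printf("created docker network on 192.168.%d.0/24\n", octet)
		return
	}
	fmt.Println("no free subnet found")
}
```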
	I0223 13:22:25.748855   15651 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:22:25.807140   15651 cli_runner.go:164] Run: docker volume create kindnet-235000 --label name.minikube.sigs.k8s.io=kindnet-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:22:25.862410   15651 oci.go:103] Successfully created a docker volume kindnet-235000
	I0223 13:22:25.862533   15651 cli_runner.go:164] Run: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:22:26.086790   15651 cli_runner.go:211] docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:22:26.086839   15651 client.go:171] LocalClient.Create took 651.950072ms
	I0223 13:22:28.087122   15651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:22:28.087201   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:28.142474   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:28.142604   15651 retry.go:31] will retry after 229.24989ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:28.372374   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:28.428386   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:28.428470   15651 retry.go:31] will retry after 379.631507ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:28.809478   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:28.864835   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:28.864919   15651 retry.go:31] will retry after 804.865246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:29.670548   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:29.727431   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	W0223 13:22:29.727528   15651 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	
	W0223 13:22:29.727546   15651 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
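Editor's note: the repeated `retry.go:31] will retry after ...` lines are a bounded retry loop around `docker container inspect` while minikube tries to learn the node's SSH port; here every attempt fails with "No such container" because the node container was never created. The helper below shows the general shape of such a retry-with-growing-delay loop; the delays, attempt budget and error text are illustrative, not minikube's actual values.

```go
// Minimal sketch of a retry helper in the spirit of the
// "will retry after ..." lines above. Illustrative only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds, the attempt budget is used up,
// or the overall deadline passes; it returns the last error.
func retry(attempts int, deadline time.Duration, fn func() error) error {
	start := time.Now()
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			break
		}
		// Grow the wait a little each round and add jitter, roughly
		// matching the 150ms-900ms waits seen in the log.
		wait := time.Duration(150+rand.Intn(300*(i+1))) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	err := retry(5, 10*time.Second, func() error {
		return errors.New(`get port 22 for "kindnet-235000": exit status 1`)
	})
	fmt.Println("gave up:", err)
}
```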
	I0223 13:22:29.727606   15651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:22:29.727656   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:29.782408   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:29.782502   15651 retry.go:31] will retry after 373.25368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:30.156516   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:30.212200   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:30.212290   15651 retry.go:31] will retry after 293.5776ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:30.507501   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:30.563269   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:30.563354   15651 retry.go:31] will retry after 484.580924ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:31.048082   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:31.103069   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	W0223 13:22:31.103163   15651 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	
	W0223 13:22:31.103179   15651 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:31.103185   15651 start.go:128] duration metric: createHost completed in 5.712169901s
	I0223 13:22:31.103191   15651 start.go:83] releasing machines lock for "kindnet-235000", held for 5.712287566s
	W0223 13:22:31.103207   15651 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
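Editor's note: this is the root cause of the failure. Exit status 125 from `docker run` means the docker CLI/daemon failed before any container process ran; here the daemon could not reach Docker Desktop's containerd backend (`connection refused` on /var/run/desktop-containerd/containerd.sock), so the preload sidecar was never started and every later `docker container inspect` for kindnet-235000 reports "No such container". The sketch below shows how a caller can tell a daemon-side 125 apart from a failure inside the container and check daemon reachability before retrying; it is illustrative only (busybox is just a placeholder image), not minikube's oci.go.

```go
// Minimal sketch: distinguish a daemon-side `docker run` failure
// (exit status 125, as in the log) from a non-zero exit inside the
// container, and probe daemon reachability. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func daemonReachable() bool {
	// `docker version` talks to the daemon; it fails when the backend
	// (here Docker Desktop's containerd) refuses connections.
	return exec.Command("docker", "version", "--format", "{{.Server.Version}}").Run() == nil
}

func main() {
	cmd := exec.Command("docker", "run", "--rm", "busybox", "true")
	if err := cmd.Run(); err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 125 {
			fmt.Println("docker CLI/daemon error before the container started")
			if !daemonReachable() {
				fmt.Println("daemon unreachable; restart Docker Desktop and retry")
			}
			return
		}
		fmt.Println("the container itself exited non-zero:", err)
		return
	}
	fmt.Println("container ran successfully")
}
```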
	I0223 13:22:31.103634   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:31.159056   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:31.159113   15651 delete.go:82] Unable to get host status for kindnet-235000, assuming it has already been deleted: state: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	W0223 13:22:31.159262   15651 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:22:31.159275   15651 start.go:706] Will try again in 5 seconds ...
	I0223 13:22:36.159467   15651 start.go:364] acquiring machines lock for kindnet-235000: {Name:mk5572fbbbec76974657dfc241ce29b1416d6f3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:22:36.159627   15651 start.go:368] acquired machines lock for "kindnet-235000" in 126.293µs
	I0223 13:22:36.159668   15651 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:22:36.159681   15651 fix.go:55] fixHost starting: 
	I0223 13:22:36.160113   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:36.216163   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:36.216205   15651 fix.go:103] recreateIfNeeded on kindnet-235000: state= err=unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:36.216227   15651 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:22:36.238212   15651 out.go:177] * docker "kindnet-235000" container is missing, will recreate.
	I0223 13:22:36.281723   15651 delete.go:124] DEMOLISHING kindnet-235000 ...
	I0223 13:22:36.281886   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:36.336212   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:22:36.336257   15651 stop.go:75] unable to get state: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:36.336274   15651 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:36.336653   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:36.391430   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:36.391480   15651 delete.go:82] Unable to get host status for kindnet-235000, assuming it has already been deleted: state: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:36.391564   15651 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-235000
	W0223 13:22:36.445725   15651 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-235000 returned with exit code 1
	I0223 13:22:36.445757   15651 kic.go:367] could not find the container kindnet-235000 to remove it. will try anyways
	I0223 13:22:36.445830   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:36.499486   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:22:36.499527   15651 oci.go:84] error getting container status, will try to delete anyways: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:36.499607   15651 cli_runner.go:164] Run: docker exec --privileged -t kindnet-235000 /bin/bash -c "sudo init 0"
	W0223 13:22:36.553605   15651 cli_runner.go:211] docker exec --privileged -t kindnet-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:22:36.553637   15651 oci.go:641] error shutdown kindnet-235000: docker exec --privileged -t kindnet-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:37.555629   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:37.643509   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:37.643554   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:37.643563   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:37.643583   15651 retry.go:31] will retry after 451.670532ms: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:38.095658   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:38.151595   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:38.151639   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:38.151649   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:38.151669   15651 retry.go:31] will retry after 862.017925ms: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:39.015317   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:39.069756   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:39.069814   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:39.069825   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:39.069856   15651 retry.go:31] will retry after 794.094915ms: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:39.865637   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:39.923959   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:39.924002   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:39.924010   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:39.924031   15651 retry.go:31] will retry after 1.551305367s: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:41.475800   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:41.531726   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:41.531774   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:41.531782   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:41.531807   15651 retry.go:31] will retry after 2.959713231s: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:44.493448   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:44.550263   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:44.550306   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:44.550323   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:44.550345   15651 retry.go:31] will retry after 2.475440164s: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:47.027494   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:47.083677   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:47.083721   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:47.083729   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:47.083750   15651 retry.go:31] will retry after 6.627987393s: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:53.713722   15651 cli_runner.go:164] Run: docker container inspect kindnet-235000 --format={{.State.Status}}
	W0223 13:22:53.768946   15651 cli_runner.go:211] docker container inspect kindnet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:22:53.768990   15651 oci.go:653] temporary error verifying shutdown: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:53.768997   15651 oci.go:655] temporary error: container kindnet-235000 status is  but expect it to be exited
	I0223 13:22:53.769023   15651 oci.go:88] couldn't shut down kindnet-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kindnet-235000": docker container inspect kindnet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	 
	I0223 13:22:53.769104   15651 cli_runner.go:164] Run: docker rm -f -v kindnet-235000
	I0223 13:22:53.825845   15651 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kindnet-235000
	W0223 13:22:53.880417   15651 cli_runner.go:211] docker container inspect -f {{.Id}} kindnet-235000 returned with exit code 1
	I0223 13:22:53.880528   15651 cli_runner.go:164] Run: docker network inspect kindnet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:22:53.935222   15651 cli_runner.go:164] Run: docker network rm kindnet-235000
	W0223 13:22:54.035901   15651 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:22:54.035923   15651 fix.go:115] Sleeping 1 second for extra luck!
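Editor's note: before recreating the machine, the log above shows a best-effort demolish of the profile: a graceful `sudo init 0` inside the container, polling for an "exited" state, and finally `docker rm -f -v` plus `docker network rm`; each failing step is logged as "probably ok" because the container never existed. The sketch below mirrors that best-effort cleanup for this profile; the `docker volume rm` step is an added illustration and the error handling is deliberately simplistic.

```go
// Minimal sketch of the best-effort cleanup performed before the
// machine is recreated: every removal is attempted and failures are
// logged but otherwise ignored. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	name := "kindnet-235000"
	steps := [][]string{
		{"docker", "rm", "-f", "-v", name},     // container (and anonymous volumes)
		{"docker", "volume", "rm", "-f", name}, // named volume (illustrative extra step)
		{"docker", "network", "rm", name},      // per-profile bridge network
	}
	for _, args := range steps {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			// "delete failed (probably ok)" in the log: the resource may
			// simply never have been created.
			fmt.Printf("%v failed (probably ok): %s", args, out)
		}
	}
}
```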
	I0223 13:22:55.037366   15651 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:22:55.059069   15651 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:22:55.059258   15651 start.go:159] libmachine.API.Create for "kindnet-235000" (driver="docker")
	I0223 13:22:55.059297   15651 client.go:168] LocalClient.Create starting
	I0223 13:22:55.059497   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:22:55.059624   15651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:22:55.059667   15651 main.go:141] libmachine: Parsing certificate...
	I0223 13:22:55.059774   15651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:22:55.059858   15651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:22:55.059876   15651 main.go:141] libmachine: Parsing certificate...
	I0223 13:22:55.081472   15651 cli_runner.go:164] Run: docker network inspect kindnet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:22:55.138787   15651 cli_runner.go:211] docker network inspect kindnet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:22:55.138876   15651 network_create.go:281] running [docker network inspect kindnet-235000] to gather additional debugging logs...
	I0223 13:22:55.138895   15651 cli_runner.go:164] Run: docker network inspect kindnet-235000
	W0223 13:22:55.194277   15651 cli_runner.go:211] docker network inspect kindnet-235000 returned with exit code 1
	I0223 13:22:55.194304   15651 network_create.go:284] error running [docker network inspect kindnet-235000]: docker network inspect kindnet-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kindnet-235000
	I0223 13:22:55.194321   15651 network_create.go:286] output of [docker network inspect kindnet-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kindnet-235000
	
	** /stderr **
	I0223 13:22:55.194403   15651 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:22:55.249866   15651 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:55.251366   15651 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:55.252847   15651 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:55.253146   15651 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010efa70}
	I0223 13:22:55.253157   15651 network_create.go:123] attempt to create docker network kindnet-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:22:55.253224   15651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000
	W0223 13:22:55.308479   15651 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000 returned with exit code 1
	W0223 13:22:55.308508   15651 network_create.go:148] failed to create docker network kindnet-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:22:55.308521   15651 network_create.go:115] failed to create docker network kindnet-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:22:55.309837   15651 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:22:55.310165   15651 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011ee060}
	I0223 13:22:55.310176   15651 network_create.go:123] attempt to create docker network kindnet-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:22:55.310246   15651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-235000 kindnet-235000
	I0223 13:22:55.398687   15651 network_create.go:107] docker network kindnet-235000 192.168.85.0/24 created
	I0223 13:22:55.398726   15651 kic.go:117] calculated static IP "192.168.85.2" for the "kindnet-235000" container
	I0223 13:22:55.398835   15651 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:22:55.456368   15651 cli_runner.go:164] Run: docker volume create kindnet-235000 --label name.minikube.sigs.k8s.io=kindnet-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:22:55.510126   15651 oci.go:103] Successfully created a docker volume kindnet-235000
	I0223 13:22:55.510243   15651 cli_runner.go:164] Run: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:22:55.644856   15651 cli_runner.go:211] docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:22:55.644897   15651 client.go:171] LocalClient.Create took 585.591254ms
	I0223 13:22:57.646535   15651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:22:57.646660   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:57.702307   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:57.702395   15651 retry.go:31] will retry after 149.164063ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:57.852966   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:57.908767   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:57.908888   15651 retry.go:31] will retry after 472.710393ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:58.382840   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:58.438921   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:58.439008   15651 retry.go:31] will retry after 604.052651ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:59.044211   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:59.099484   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	W0223 13:22:59.099597   15651 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	
	W0223 13:22:59.099616   15651 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:59.099690   15651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:22:59.099749   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:59.155682   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:59.155770   15651 retry.go:31] will retry after 302.424604ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:59.459022   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:59.514574   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:59.514674   15651 retry.go:31] will retry after 277.470692ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:22:59.793769   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:22:59.849958   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:22:59.850044   15651 retry.go:31] will retry after 432.535208ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:00.283100   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:00.339185   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:00.339275   15651 retry.go:31] will retry after 622.481934ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:00.962544   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:01.018667   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	W0223 13:23:01.018767   15651 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	
	W0223 13:23:01.018784   15651 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:01.018789   15651 start.go:128] duration metric: createHost completed in 5.981387306s
	I0223 13:23:01.018868   15651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:23:01.018925   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:01.073911   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:01.074001   15651 retry.go:31] will retry after 158.639038ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:01.234046   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:01.291050   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:01.291134   15651 retry.go:31] will retry after 468.590256ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:01.761435   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:01.817935   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:01.818021   15651 retry.go:31] will retry after 836.098787ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:02.655206   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:02.711480   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	W0223 13:23:02.711569   15651 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	
	W0223 13:23:02.711590   15651 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:02.711671   15651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:23:02.711723   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:02.765589   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:02.765688   15651 retry.go:31] will retry after 255.982379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:03.023944   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:03.079708   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:03.079803   15651 retry.go:31] will retry after 488.681639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:03.569660   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:03.625338   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	I0223 13:23:03.625420   15651 retry.go:31] will retry after 482.585712ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:04.108672   15651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000
	W0223 13:23:04.165266   15651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000 returned with exit code 1
	W0223 13:23:04.165362   15651 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	
	W0223 13:23:04.165387   15651 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kindnet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kindnet-235000
	I0223 13:23:04.165392   15651 fix.go:57] fixHost completed within 28.005646434s
	I0223 13:23:04.165400   15651 start.go:83] releasing machines lock for "kindnet-235000", held for 28.005694035s
	W0223 13:23:04.165535   15651 out.go:239] * Failed to start docker container. Running "minikube delete -p kindnet-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p kindnet-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:23:04.208948   15651 out.go:177] 
	W0223 13:23:04.230038   15651 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kindnet-235000 container: docker run --rm --name kindnet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-235000 --entrypoint /usr/bin/test -v kindnet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:23:04.230064   15651 out.go:239] * 
	* 
	W0223 13:23:04.231456   15651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:23:04.315921   15651 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kindnet/Start (39.84s)

TestNetworkPlugins/group/calico/Start (38.1s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p calico-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker : exit status 80 (38.090873769s)

-- stdout --
	* [calico-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node calico-235000 in cluster calico-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "calico-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

-- /stdout --
** stderr ** 
	I0223 13:23:12.666647   16104 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:23:12.666790   16104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:23:12.666795   16104 out.go:309] Setting ErrFile to fd 2...
	I0223 13:23:12.666799   16104 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:23:12.666912   16104 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:23:12.668260   16104 out.go:303] Setting JSON to false
	I0223 13:23:12.686676   16104 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3167,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:23:12.686754   16104 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:23:12.708506   16104 out.go:177] * [calico-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:23:12.750941   16104 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:23:12.750932   16104 notify.go:220] Checking for updates...
	I0223 13:23:12.772914   16104 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:23:12.794557   16104 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:23:12.815658   16104 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:23:12.837310   16104 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:23:12.858727   16104 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:23:12.882108   16104 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:23:12.882260   16104 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:23:12.882364   16104 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:23:12.882415   16104 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:23:12.943189   16104 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:23:12.943331   16104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:23:13.085700   16104 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:23:12.993792269 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:23:13.129128   16104 out.go:177] * Using the docker driver based on user configuration
	I0223 13:23:13.152094   16104 start.go:296] selected driver: docker
	I0223 13:23:13.152119   16104 start.go:857] validating driver "docker" against <nil>
	I0223 13:23:13.152137   16104 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:23:13.155701   16104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:23:13.297902   16104 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:23:13.206374436 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:23:13.298027   16104 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:23:13.298217   16104 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:23:13.319675   16104 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:23:13.340583   16104 cni.go:84] Creating CNI manager for "calico"
	I0223 13:23:13.340608   16104 start_flags.go:314] Found "Calico" CNI - setting NetworkPlugin=cni
	I0223 13:23:13.340626   16104 start_flags.go:319] config:
	{Name:calico-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:calico-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:23:13.383448   16104 out.go:177] * Starting control plane node calico-235000 in cluster calico-235000
	I0223 13:23:13.404599   16104 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:23:13.425707   16104 out.go:177] * Pulling base image ...
	I0223 13:23:13.467471   16104 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:23:13.467536   16104 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:23:13.467556   16104 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:23:13.467578   16104 cache.go:57] Caching tarball of preloaded images
	I0223 13:23:13.467831   16104 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:23:13.467851   16104 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:23:13.468899   16104 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/calico-235000/config.json ...
	I0223 13:23:13.469056   16104 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/calico-235000/config.json: {Name:mkeee6cc804437d2e016bed9db572942b54d54ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:23:13.525253   16104 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:23:13.525271   16104 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:23:13.525299   16104 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:23:13.525356   16104 start.go:364] acquiring machines lock for calico-235000: {Name:mk5f3ed6bf39467fd63564af9d6c3c81e3cce8b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:23:13.525528   16104 start.go:368] acquired machines lock for "calico-235000" in 159.382µs
	I0223 13:23:13.525563   16104 start.go:93] Provisioning new machine with config: &{Name:calico-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:calico-235000 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:23:13.525628   16104 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:23:13.569097   16104 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:23:13.569518   16104 start.go:159] libmachine.API.Create for "calico-235000" (driver="docker")
	I0223 13:23:13.569571   16104 client.go:168] LocalClient.Create starting
	I0223 13:23:13.569836   16104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:23:13.569931   16104 main.go:141] libmachine: Decoding PEM data...
	I0223 13:23:13.569967   16104 main.go:141] libmachine: Parsing certificate...
	I0223 13:23:13.570073   16104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:23:13.570139   16104 main.go:141] libmachine: Decoding PEM data...
	I0223 13:23:13.570157   16104 main.go:141] libmachine: Parsing certificate...
	I0223 13:23:13.571015   16104 cli_runner.go:164] Run: docker network inspect calico-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:23:13.627072   16104 cli_runner.go:211] docker network inspect calico-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:23:13.627180   16104 network_create.go:281] running [docker network inspect calico-235000] to gather additional debugging logs...
	I0223 13:23:13.627204   16104 cli_runner.go:164] Run: docker network inspect calico-235000
	W0223 13:23:13.681956   16104 cli_runner.go:211] docker network inspect calico-235000 returned with exit code 1
	I0223 13:23:13.681983   16104 network_create.go:284] error running [docker network inspect calico-235000]: docker network inspect calico-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-235000
	I0223 13:23:13.681992   16104 network_create.go:286] output of [docker network inspect calico-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-235000
	
	** /stderr **
	I0223 13:23:13.682083   16104 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:23:13.738519   16104 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:23:13.738886   16104 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000e85ee0}
	I0223 13:23:13.738899   16104 network_create.go:123] attempt to create docker network calico-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:23:13.738972   16104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000
	W0223 13:23:13.793572   16104 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000 returned with exit code 1
	W0223 13:23:13.793600   16104 network_create.go:148] failed to create docker network calico-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:23:13.793614   16104 network_create.go:115] failed to create docker network calico-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:23:13.794958   16104 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:23:13.795276   16104 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010f4390}
	I0223 13:23:13.795286   16104 network_create.go:123] attempt to create docker network calico-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:23:13.795366   16104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000
	I0223 13:23:13.881793   16104 network_create.go:107] docker network calico-235000 192.168.67.0/24 created
	I0223 13:23:13.881825   16104 kic.go:117] calculated static IP "192.168.67.2" for the "calico-235000" container
	I0223 13:23:13.881951   16104 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:23:13.939780   16104 cli_runner.go:164] Run: docker volume create calico-235000 --label name.minikube.sigs.k8s.io=calico-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:23:13.995383   16104 oci.go:103] Successfully created a docker volume calico-235000
	I0223 13:23:13.995507   16104 cli_runner.go:164] Run: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:23:14.218450   16104 cli_runner.go:211] docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:23:14.218497   16104 client.go:171] LocalClient.Create took 648.916372ms
	I0223 13:23:16.219641   16104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:23:16.219766   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:16.275523   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:16.275657   16104 retry.go:31] will retry after 360.461431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:16.637663   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:16.693904   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:16.693989   16104 retry.go:31] will retry after 212.467201ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:16.907678   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:16.963506   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:16.963593   16104 retry.go:31] will retry after 578.178016ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:17.543622   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:17.598722   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	W0223 13:23:17.598814   16104 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	
	W0223 13:23:17.598833   16104 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:17.598894   16104 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:23:17.598941   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:17.654060   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:17.654157   16104 retry.go:31] will retry after 235.436411ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:17.891085   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:17.950567   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:17.950652   16104 retry.go:31] will retry after 302.052586ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:18.253726   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:18.309215   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:18.309303   16104 retry.go:31] will retry after 826.165639ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:19.137226   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:19.216585   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	W0223 13:23:19.216735   16104 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	
	W0223 13:23:19.216762   16104 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:19.216780   16104 start.go:128] duration metric: createHost completed in 5.691134085s
	I0223 13:23:19.216794   16104 start.go:83] releasing machines lock for "calico-235000", held for 5.691241468s
	W0223 13:23:19.216821   16104 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:23:19.217571   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:19.277917   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:19.277965   16104 delete.go:82] Unable to get host status for calico-235000, assuming it has already been deleted: state: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	W0223 13:23:19.278110   16104 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:23:19.278122   16104 start.go:706] Will try again in 5 seconds ...
	I0223 13:23:24.279578   16104 start.go:364] acquiring machines lock for calico-235000: {Name:mk5f3ed6bf39467fd63564af9d6c3c81e3cce8b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:23:24.279734   16104 start.go:368] acquired machines lock for "calico-235000" in 123.799µs
	I0223 13:23:24.279772   16104 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:23:24.279786   16104 fix.go:55] fixHost starting: 
	I0223 13:23:24.280219   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:24.337851   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:24.337899   16104 fix.go:103] recreateIfNeeded on calico-235000: state= err=unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:24.337917   16104 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:23:24.359687   16104 out.go:177] * docker "calico-235000" container is missing, will recreate.
	I0223 13:23:24.403497   16104 delete.go:124] DEMOLISHING calico-235000 ...
	I0223 13:23:24.403742   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:24.460284   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:23:24.460325   16104 stop.go:75] unable to get state: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:24.460343   16104 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:24.460711   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:24.514904   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:24.514951   16104 delete.go:82] Unable to get host status for calico-235000, assuming it has already been deleted: state: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:24.515038   16104 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-235000
	W0223 13:23:24.570957   16104 cli_runner.go:211] docker container inspect -f {{.Id}} calico-235000 returned with exit code 1
	I0223 13:23:24.570990   16104 kic.go:367] could not find the container calico-235000 to remove it. will try anyways
	I0223 13:23:24.571060   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:24.625443   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:23:24.625493   16104 oci.go:84] error getting container status, will try to delete anyways: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:24.625584   16104 cli_runner.go:164] Run: docker exec --privileged -t calico-235000 /bin/bash -c "sudo init 0"
	W0223 13:23:24.679356   16104 cli_runner.go:211] docker exec --privileged -t calico-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:23:24.679393   16104 oci.go:641] error shutdown calico-235000: docker exec --privileged -t calico-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:25.679661   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:25.737273   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:25.737316   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:25.737325   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:25.737346   16104 retry.go:31] will retry after 723.245099ms: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:26.460973   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:26.518201   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:26.518245   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:26.518255   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:26.518276   16104 retry.go:31] will retry after 414.351769ms: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:26.934421   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:26.990688   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:26.990734   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:26.990743   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:26.990762   16104 retry.go:31] will retry after 1.130523755s: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:28.121690   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:28.176613   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:28.176656   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:28.176665   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:28.176685   16104 retry.go:31] will retry after 2.0311206s: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:30.209092   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:30.266759   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:30.266803   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:30.266812   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:30.266832   16104 retry.go:31] will retry after 2.558401495s: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:32.825802   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:32.883640   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:32.883685   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:32.883694   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:32.883714   16104 retry.go:31] will retry after 2.679720202s: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:35.563920   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:35.620017   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:35.620154   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:35.620169   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:35.620188   16104 retry.go:31] will retry after 4.941841228s: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:40.563678   16104 cli_runner.go:164] Run: docker container inspect calico-235000 --format={{.State.Status}}
	W0223 13:23:40.619637   16104 cli_runner.go:211] docker container inspect calico-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:23:40.619687   16104 oci.go:653] temporary error verifying shutdown: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:40.619694   16104 oci.go:655] temporary error: container calico-235000 status is  but expect it to be exited
	I0223 13:23:40.619719   16104 oci.go:88] couldn't shut down calico-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "calico-235000": docker container inspect calico-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	 
	I0223 13:23:40.619799   16104 cli_runner.go:164] Run: docker rm -f -v calico-235000
	I0223 13:23:40.675866   16104 cli_runner.go:164] Run: docker container inspect -f {{.Id}} calico-235000
	W0223 13:23:40.729904   16104 cli_runner.go:211] docker container inspect -f {{.Id}} calico-235000 returned with exit code 1
	I0223 13:23:40.730012   16104 cli_runner.go:164] Run: docker network inspect calico-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:23:40.785785   16104 cli_runner.go:164] Run: docker network rm calico-235000
	W0223 13:23:40.900984   16104 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:23:40.901003   16104 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:23:41.901936   16104 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:23:41.924098   16104 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:23:41.924280   16104 start.go:159] libmachine.API.Create for "calico-235000" (driver="docker")
	I0223 13:23:41.924319   16104 client.go:168] LocalClient.Create starting
	I0223 13:23:41.924523   16104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:23:41.924621   16104 main.go:141] libmachine: Decoding PEM data...
	I0223 13:23:41.924646   16104 main.go:141] libmachine: Parsing certificate...
	I0223 13:23:41.924743   16104 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:23:41.924808   16104 main.go:141] libmachine: Decoding PEM data...
	I0223 13:23:41.924825   16104 main.go:141] libmachine: Parsing certificate...
	I0223 13:23:41.946102   16104 cli_runner.go:164] Run: docker network inspect calico-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:23:42.002525   16104 cli_runner.go:211] docker network inspect calico-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:23:42.002620   16104 network_create.go:281] running [docker network inspect calico-235000] to gather additional debugging logs...
	I0223 13:23:42.002636   16104 cli_runner.go:164] Run: docker network inspect calico-235000
	W0223 13:23:42.057191   16104 cli_runner.go:211] docker network inspect calico-235000 returned with exit code 1
	I0223 13:23:42.057216   16104 network_create.go:284] error running [docker network inspect calico-235000]: docker network inspect calico-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-235000
	I0223 13:23:42.057226   16104 network_create.go:286] output of [docker network inspect calico-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-235000
	
	** /stderr **
	I0223 13:23:42.057319   16104 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:23:42.114532   16104 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:23:42.116025   16104 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:23:42.117340   16104 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:23:42.117612   16104 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001137500}
	I0223 13:23:42.117624   16104 network_create.go:123] attempt to create docker network calico-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:23:42.117689   16104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000
	W0223 13:23:42.174635   16104 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000 returned with exit code 1
	W0223 13:23:42.174666   16104 network_create.go:148] failed to create docker network calico-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:23:42.174680   16104 network_create.go:115] failed to create docker network calico-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:23:42.175990   16104 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:23:42.176310   16104 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011c0cb0}
	I0223 13:23:42.176325   16104 network_create.go:123] attempt to create docker network calico-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:23:42.176397   16104 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-235000 calico-235000
	I0223 13:23:42.262028   16104 network_create.go:107] docker network calico-235000 192.168.85.0/24 created
	I0223 13:23:42.262058   16104 kic.go:117] calculated static IP "192.168.85.2" for the "calico-235000" container
	I0223 13:23:42.262166   16104 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:23:42.318792   16104 cli_runner.go:164] Run: docker volume create calico-235000 --label name.minikube.sigs.k8s.io=calico-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:23:42.372324   16104 oci.go:103] Successfully created a docker volume calico-235000
	I0223 13:23:42.372463   16104 cli_runner.go:164] Run: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:23:42.504057   16104 cli_runner.go:211] docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:23:42.504103   16104 client.go:171] LocalClient.Create took 579.773525ms
	I0223 13:23:44.505223   16104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:23:44.505386   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:44.561094   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:44.561176   16104 retry.go:31] will retry after 145.701232ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:44.707819   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:44.764569   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:44.764657   16104 retry.go:31] will retry after 375.796843ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:45.141836   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:45.198658   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:45.198745   16104 retry.go:31] will retry after 817.291725ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:46.018321   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:46.074758   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	W0223 13:23:46.074851   16104 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	
	W0223 13:23:46.074867   16104 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:46.074922   16104 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:23:46.074981   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:46.129254   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:46.129341   16104 retry.go:31] will retry after 302.86563ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:46.432556   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:46.488976   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:46.489096   16104 retry.go:31] will retry after 535.664635ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:47.025302   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:47.081037   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:47.081125   16104 retry.go:31] will retry after 581.09396ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:47.663619   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:47.719323   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	W0223 13:23:47.719421   16104 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	
	W0223 13:23:47.719437   16104 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:47.719442   16104 start.go:128] duration metric: createHost completed in 5.817466528s
	I0223 13:23:47.719512   16104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:23:47.719571   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:47.773885   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:47.773967   16104 retry.go:31] will retry after 193.591884ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:47.969766   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:48.027134   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:48.027215   16104 retry.go:31] will retry after 399.454503ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:48.427802   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:48.483554   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:48.483639   16104 retry.go:31] will retry after 625.819301ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:49.111858   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:49.168681   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	W0223 13:23:49.168778   16104 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	
	W0223 13:23:49.168797   16104 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:49.168861   16104 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:23:49.168909   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:49.224384   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:49.224474   16104 retry.go:31] will retry after 169.82791ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:49.395964   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:49.451608   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:49.451698   16104 retry.go:31] will retry after 531.181828ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:49.983657   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:50.039297   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	I0223 13:23:50.039378   16104 retry.go:31] will retry after 441.691284ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:50.482842   16104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000
	W0223 13:23:50.538938   16104 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000 returned with exit code 1
	W0223 13:23:50.539027   16104 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	
	W0223 13:23:50.539045   16104 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "calico-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: calico-235000
	I0223 13:23:50.539050   16104 fix.go:57] fixHost completed within 26.259203044s
	I0223 13:23:50.539057   16104 start.go:83] releasing machines lock for "calico-235000", held for 26.259249356s
	W0223 13:23:50.539189   16104 out.go:239] * Failed to start docker container. Running "minikube delete -p calico-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p calico-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:23:50.582602   16104 out.go:177] 
	W0223 13:23:50.606055   16104 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for calico-235000 container: docker run --rm --name calico-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-235000 --entrypoint /usr/bin/test -v calico-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:23:50.606086   16104 out.go:239] * 
	* 
	W0223 13:23:50.607448   16104 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:23:50.669735   16104 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (38.10s)
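The retries above could never succeed: the calico-235000 container was never created in the first place, because the preload-sidecar docker run exited with status 125 ("Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused"), so every later docker container inspect returns "No such container". The oci.go/retry.go lines show a poll-with-backoff loop around that inspect call; below is a minimal, hypothetical Go sketch of the same pattern, not minikube's actual implementation -- only the docker CLI invocation is taken from the log, and the function names, attempt count, and delays are illustrative.

	// Hypothetical sketch of the inspect-and-back-off pattern seen in the
	// oci.go/retry.go lines above: poll the container state via the docker
	// CLI and grow the delay between attempts until it reports "exited".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// containerStatus runs `docker container inspect --format {{.State.Status}}`
	// and returns the reported state, or an error if the container is unknown.
	func containerStatus(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("unknown state %q: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	// waitForExit retries the status check with a growing delay, mirroring the
	// "will retry after ..." lines in the log.
	func waitForExit(name string, attempts int) error {
		delay := 500 * time.Millisecond
		for i := 0; i < attempts; i++ {
			status, err := containerStatus(name)
			if err == nil && status == "exited" {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return fmt.Errorf("couldn't verify container %s is exited", name)
	}

	func main() {
		if err := waitForExit("calico-235000", 5); err != nil {
			fmt.Println(err)
		}
	}

With the daemon's containerd socket refusing connections, no amount of inspect retries helps; the actionable signal in this log is the exit status 125 from the docker run that prepares the preload volume, not the later "No such container" errors.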

TestNetworkPlugins/group/custom-flannel/Start (40.76s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p custom-flannel-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker : exit status 80 (40.752748417s)
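The stderr dump below shares a pattern with the calico log above: docker network create first fails with "Error response from daemon: Pool overlaps with other one on this address space" (192.168.76.0/24 for calico, 192.168.58.0/24 here), and network_create.go then retries on the next private /24, which succeeds. A minimal, hypothetical sketch of that fallback follows, assuming only that the docker CLI is on PATH; the candidate subnets, flags, and helper names are illustrative rather than minikube's actual code.

	// Hypothetical sketch of the subnet-fallback behaviour seen in the
	// network_create.go log lines: try candidate /24 subnets in order and
	// move on when the daemon reports an overlapping address pool.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func createNetwork(name, subnet, gateway string) error {
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s: %w", strings.TrimSpace(string(out)), err)
		}
		return nil
	}

	func main() {
		name := "custom-flannel-235000"
		// Candidate private /24s, mirroring the 192.168.x.0/24 progression in the log.
		candidates := [][2]string{
			{"192.168.58.0/24", "192.168.58.1"},
			{"192.168.67.0/24", "192.168.67.1"},
			{"192.168.76.0/24", "192.168.76.1"},
		}
		for _, c := range candidates {
			err := createNetwork(name, c[0], c[1])
			if err == nil {
				fmt.Println("created", name, "on", c[0])
				return
			}
			if strings.Contains(err.Error(), "Pool overlaps") {
				continue // subnet taken, try the next candidate
			}
			fmt.Println("giving up:", err)
			return
		}
	}

In both runs the subnet fallback itself works (the network is eventually created); the failure comes later, from the preload-sidecar docker run, not from network creation.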

-- stdout --
	* [custom-flannel-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node custom-flannel-235000 in cluster custom-flannel-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "custom-flannel-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

-- /stdout --
** stderr ** 
	I0223 13:23:59.220293   16524 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:23:59.220453   16524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:23:59.220458   16524 out.go:309] Setting ErrFile to fd 2...
	I0223 13:23:59.220462   16524 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:23:59.220566   16524 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:23:59.221929   16524 out.go:303] Setting JSON to false
	I0223 13:23:59.240245   16524 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3214,"bootTime":1677184225,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:23:59.240333   16524 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:23:59.261507   16524 out.go:177] * [custom-flannel-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:23:59.303746   16524 notify.go:220] Checking for updates...
	I0223 13:23:59.303767   16524 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:23:59.325759   16524 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:23:59.347527   16524 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:23:59.368879   16524 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:23:59.390616   16524 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:23:59.411339   16524 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:23:59.433102   16524 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:23:59.433288   16524 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:23:59.433427   16524 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:23:59.433492   16524 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:23:59.496328   16524 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:23:59.496458   16524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:23:59.639395   16524 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:23:59.546596162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:23:59.682864   16524 out.go:177] * Using the docker driver based on user configuration
	I0223 13:23:59.704130   16524 start.go:296] selected driver: docker
	I0223 13:23:59.704162   16524 start.go:857] validating driver "docker" against <nil>
	I0223 13:23:59.704183   16524 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:23:59.708076   16524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:23:59.849716   16524 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:23:59.757506542 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:23:59.849840   16524 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:23:59.850019   16524 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:23:59.871654   16524 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:23:59.893343   16524 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0223 13:23:59.893415   16524 start_flags.go:314] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I0223 13:23:59.893441   16524 start_flags.go:319] config:
	{Name:custom-flannel-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:custom-flannel-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:23:59.936444   16524 out.go:177] * Starting control plane node custom-flannel-235000 in cluster custom-flannel-235000
	I0223 13:23:59.957329   16524 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:23:59.978537   16524 out.go:177] * Pulling base image ...
	I0223 13:24:00.020442   16524 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:24:00.020476   16524 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:24:00.020532   16524 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:24:00.020552   16524 cache.go:57] Caching tarball of preloaded images
	I0223 13:24:00.020751   16524 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:24:00.020770   16524 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:24:00.021500   16524 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/custom-flannel-235000/config.json ...
	I0223 13:24:00.021758   16524 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/custom-flannel-235000/config.json: {Name:mk3694ab5359f2ec580411bed2b5eb124567eb1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:24:00.078191   16524 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:24:00.078220   16524 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:24:00.078240   16524 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:24:00.078302   16524 start.go:364] acquiring machines lock for custom-flannel-235000: {Name:mk2884cbfb37351a8ec28dfb3a5d9feb8419018c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:24:00.078462   16524 start.go:368] acquired machines lock for "custom-flannel-235000" in 146.87µs
	I0223 13:24:00.078499   16524 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:custom-flannel-235000 Namespace:default APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryM
irror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:24:00.078581   16524 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:24:00.122985   16524 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:24:00.123415   16524 start.go:159] libmachine.API.Create for "custom-flannel-235000" (driver="docker")
	I0223 13:24:00.123451   16524 client.go:168] LocalClient.Create starting
	I0223 13:24:00.123630   16524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:24:00.123719   16524 main.go:141] libmachine: Decoding PEM data...
	I0223 13:24:00.123757   16524 main.go:141] libmachine: Parsing certificate...
	I0223 13:24:00.123884   16524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:24:00.123959   16524 main.go:141] libmachine: Decoding PEM data...
	I0223 13:24:00.123977   16524 main.go:141] libmachine: Parsing certificate...
	I0223 13:24:00.124815   16524 cli_runner.go:164] Run: docker network inspect custom-flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:24:00.179756   16524 cli_runner.go:211] docker network inspect custom-flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:24:00.179845   16524 network_create.go:281] running [docker network inspect custom-flannel-235000] to gather additional debugging logs...
	I0223 13:24:00.179861   16524 cli_runner.go:164] Run: docker network inspect custom-flannel-235000
	W0223 13:24:00.234198   16524 cli_runner.go:211] docker network inspect custom-flannel-235000 returned with exit code 1
	I0223 13:24:00.234221   16524 network_create.go:284] error running [docker network inspect custom-flannel-235000]: docker network inspect custom-flannel-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-flannel-235000
	I0223 13:24:00.234236   16524 network_create.go:286] output of [docker network inspect custom-flannel-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-flannel-235000
	
	** /stderr **
	I0223 13:24:00.234325   16524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:24:00.291230   16524 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:00.291548   16524 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001291b60}
	I0223 13:24:00.291571   16524 network_create.go:123] attempt to create docker network custom-flannel-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:24:00.291645   16524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000
	W0223 13:24:00.346089   16524 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000 returned with exit code 1
	W0223 13:24:00.346117   16524 network_create.go:148] failed to create docker network custom-flannel-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:24:00.346131   16524 network_create.go:115] failed to create docker network custom-flannel-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:24:00.347618   16524 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:00.347938   16524 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00074e020}
	I0223 13:24:00.347948   16524 network_create.go:123] attempt to create docker network custom-flannel-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:24:00.348017   16524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000
	I0223 13:24:00.436014   16524 network_create.go:107] docker network custom-flannel-235000 192.168.67.0/24 created
	I0223 13:24:00.436054   16524 kic.go:117] calculated static IP "192.168.67.2" for the "custom-flannel-235000" container
	I0223 13:24:00.436180   16524 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:24:00.493235   16524 cli_runner.go:164] Run: docker volume create custom-flannel-235000 --label name.minikube.sigs.k8s.io=custom-flannel-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:24:00.547823   16524 oci.go:103] Successfully created a docker volume custom-flannel-235000
	I0223 13:24:00.547954   16524 cli_runner.go:164] Run: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:24:00.764546   16524 cli_runner.go:211] docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:24:00.764589   16524 client.go:171] LocalClient.Create took 641.127661ms
	I0223 13:24:02.765623   16524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:24:02.765775   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:02.822975   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:02.823098   16524 retry.go:31] will retry after 325.434474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:03.149660   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:03.204970   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:03.205063   16524 retry.go:31] will retry after 561.824081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:03.768873   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:03.824660   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:03.824751   16524 retry.go:31] will retry after 742.002214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:04.567237   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:04.624466   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	W0223 13:24:04.624566   16524 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	
	W0223 13:24:04.624583   16524 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:04.624645   16524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:24:04.624688   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:04.678557   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:04.678663   16524 retry.go:31] will retry after 182.278532ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:04.861445   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:04.918984   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:04.919070   16524 retry.go:31] will retry after 232.484036ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:05.151915   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:05.208857   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:05.208945   16524 retry.go:31] will retry after 634.209493ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:05.844050   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:05.900058   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	W0223 13:24:05.900147   16524 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	
	W0223 13:24:05.900160   16524 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:05.900166   16524 start.go:128] duration metric: createHost completed in 5.821559964s
	I0223 13:24:05.900173   16524 start.go:83] releasing machines lock for "custom-flannel-235000", held for 5.821690306s
	W0223 13:24:05.900188   16524 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:24:05.900625   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:05.955711   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:05.955764   16524 delete.go:82] Unable to get host status for custom-flannel-235000, assuming it has already been deleted: state: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	W0223 13:24:05.955891   16524 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:24:05.955900   16524 start.go:706] Will try again in 5 seconds ...
	I0223 13:24:10.958136   16524 start.go:364] acquiring machines lock for custom-flannel-235000: {Name:mk2884cbfb37351a8ec28dfb3a5d9feb8419018c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:24:10.958359   16524 start.go:368] acquired machines lock for "custom-flannel-235000" in 182.522µs
	I0223 13:24:10.958425   16524 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:24:10.958439   16524 fix.go:55] fixHost starting: 
	I0223 13:24:10.958866   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:11.018067   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:11.018114   16524 fix.go:103] recreateIfNeeded on custom-flannel-235000: state= err=unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:11.018132   16524 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:24:11.060660   16524 out.go:177] * docker "custom-flannel-235000" container is missing, will recreate.
	I0223 13:24:11.082478   16524 delete.go:124] DEMOLISHING custom-flannel-235000 ...
	I0223 13:24:11.082690   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:11.138430   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:24:11.138476   16524 stop.go:75] unable to get state: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:11.138491   16524 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:11.138877   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:11.193298   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:11.193345   16524 delete.go:82] Unable to get host status for custom-flannel-235000, assuming it has already been deleted: state: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:11.193432   16524 cli_runner.go:164] Run: docker container inspect -f {{.Id}} custom-flannel-235000
	W0223 13:24:11.247776   16524 cli_runner.go:211] docker container inspect -f {{.Id}} custom-flannel-235000 returned with exit code 1
	I0223 13:24:11.247807   16524 kic.go:367] could not find the container custom-flannel-235000 to remove it. will try anyways
	I0223 13:24:11.247884   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:11.301658   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:24:11.301704   16524 oci.go:84] error getting container status, will try to delete anyways: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:11.301785   16524 cli_runner.go:164] Run: docker exec --privileged -t custom-flannel-235000 /bin/bash -c "sudo init 0"
	W0223 13:24:11.356245   16524 cli_runner.go:211] docker exec --privileged -t custom-flannel-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:24:11.356275   16524 oci.go:641] error shutdown custom-flannel-235000: docker exec --privileged -t custom-flannel-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:12.356526   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:12.417700   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:12.417744   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:12.417753   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:12.417772   16524 retry.go:31] will retry after 384.476662ms: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:12.802634   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:12.859560   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:12.859604   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:12.859613   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:12.859634   16524 retry.go:31] will retry after 806.777141ms: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:13.667470   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:13.728184   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:13.728228   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:13.728236   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:13.728266   16524 retry.go:31] will retry after 681.961832ms: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:14.412630   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:14.470984   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:14.471026   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:14.471033   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:14.471054   16524 retry.go:31] will retry after 1.119710179s: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:15.593071   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:15.654302   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:15.654346   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:15.654355   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:15.654374   16524 retry.go:31] will retry after 2.204062986s: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:17.859381   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:17.917907   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:17.917950   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:17.917957   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:17.917977   16524 retry.go:31] will retry after 1.992729697s: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:19.912300   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:19.970811   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:19.970853   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:19.970860   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:19.970881   16524 retry.go:31] will retry after 3.687706625s: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:23.659132   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:23.718735   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:23.718777   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:23.718785   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:23.718805   16524 retry.go:31] will retry after 5.730923001s: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:29.452271   16524 cli_runner.go:164] Run: docker container inspect custom-flannel-235000 --format={{.State.Status}}
	W0223 13:24:29.512529   16524 cli_runner.go:211] docker container inspect custom-flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:29.512570   16524 oci.go:653] temporary error verifying shutdown: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:29.512577   16524 oci.go:655] temporary error: container custom-flannel-235000 status is  but expect it to be exited
	I0223 13:24:29.512604   16524 oci.go:88] couldn't shut down custom-flannel-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "custom-flannel-235000": docker container inspect custom-flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	 
	I0223 13:24:29.512681   16524 cli_runner.go:164] Run: docker rm -f -v custom-flannel-235000
	I0223 13:24:29.571553   16524 cli_runner.go:164] Run: docker container inspect -f {{.Id}} custom-flannel-235000
	W0223 13:24:29.625929   16524 cli_runner.go:211] docker container inspect -f {{.Id}} custom-flannel-235000 returned with exit code 1
	I0223 13:24:29.626039   16524 cli_runner.go:164] Run: docker network inspect custom-flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:24:29.681861   16524 cli_runner.go:164] Run: docker network rm custom-flannel-235000
	W0223 13:24:29.794307   16524 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:24:29.794327   16524 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:24:30.794430   16524 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:24:30.816204   16524 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:24:30.816388   16524 start.go:159] libmachine.API.Create for "custom-flannel-235000" (driver="docker")
	I0223 13:24:30.816420   16524 client.go:168] LocalClient.Create starting
	I0223 13:24:30.816640   16524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:24:30.816727   16524 main.go:141] libmachine: Decoding PEM data...
	I0223 13:24:30.816749   16524 main.go:141] libmachine: Parsing certificate...
	I0223 13:24:30.816845   16524 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:24:30.816910   16524 main.go:141] libmachine: Decoding PEM data...
	I0223 13:24:30.816928   16524 main.go:141] libmachine: Parsing certificate...
	I0223 13:24:30.838736   16524 cli_runner.go:164] Run: docker network inspect custom-flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:24:30.894915   16524 cli_runner.go:211] docker network inspect custom-flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:24:30.895007   16524 network_create.go:281] running [docker network inspect custom-flannel-235000] to gather additional debugging logs...
	I0223 13:24:30.895025   16524 cli_runner.go:164] Run: docker network inspect custom-flannel-235000
	W0223 13:24:30.949594   16524 cli_runner.go:211] docker network inspect custom-flannel-235000 returned with exit code 1
	I0223 13:24:30.949633   16524 network_create.go:284] error running [docker network inspect custom-flannel-235000]: docker network inspect custom-flannel-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: custom-flannel-235000
	I0223 13:24:30.949651   16524 network_create.go:286] output of [docker network inspect custom-flannel-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: custom-flannel-235000
	
	** /stderr **
	I0223 13:24:30.949731   16524 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:24:31.006959   16524 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:31.008445   16524 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:31.009889   16524 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:31.010200   16524 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00107e690}
	I0223 13:24:31.010213   16524 network_create.go:123] attempt to create docker network custom-flannel-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:24:31.010279   16524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000
	W0223 13:24:31.065644   16524 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000 returned with exit code 1
	W0223 13:24:31.065682   16524 network_create.go:148] failed to create docker network custom-flannel-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:24:31.065695   16524 network_create.go:115] failed to create docker network custom-flannel-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:24:31.067002   16524 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:31.067319   16524 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00107f4e0}
	I0223 13:24:31.067334   16524 network_create.go:123] attempt to create docker network custom-flannel-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:24:31.067399   16524 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-235000 custom-flannel-235000
	I0223 13:24:31.155301   16524 network_create.go:107] docker network custom-flannel-235000 192.168.85.0/24 created
	I0223 13:24:31.155330   16524 kic.go:117] calculated static IP "192.168.85.2" for the "custom-flannel-235000" container
	I0223 13:24:31.155437   16524 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:24:31.214545   16524 cli_runner.go:164] Run: docker volume create custom-flannel-235000 --label name.minikube.sigs.k8s.io=custom-flannel-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:24:31.270413   16524 oci.go:103] Successfully created a docker volume custom-flannel-235000
	I0223 13:24:31.270539   16524 cli_runner.go:164] Run: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:24:31.408660   16524 cli_runner.go:211] docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:24:31.408706   16524 client.go:171] LocalClient.Create took 592.278732ms
	I0223 13:24:33.409291   16524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:24:33.409429   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:33.467327   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:33.467415   16524 retry.go:31] will retry after 131.115894ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:33.600966   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:33.656145   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:33.656232   16524 retry.go:31] will retry after 354.226106ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:34.010950   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:34.070096   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:34.070188   16524 retry.go:31] will retry after 645.532926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:34.716869   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:34.774410   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	W0223 13:24:34.774505   16524 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	
	W0223 13:24:34.774522   16524 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:34.774579   16524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:24:34.774630   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:34.829483   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:34.829583   16524 retry.go:31] will retry after 365.177234ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:35.197180   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:35.258088   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:35.258178   16524 retry.go:31] will retry after 546.963466ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:35.806224   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:35.865189   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:35.865290   16524 retry.go:31] will retry after 662.135709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:36.528661   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:36.589140   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	W0223 13:24:36.589239   16524 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	
	W0223 13:24:36.589259   16524 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:36.589263   16524 start.go:128] duration metric: createHost completed in 5.7947564s
	I0223 13:24:36.589338   16524 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:24:36.589387   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:36.645297   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:36.645388   16524 retry.go:31] will retry after 209.474648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:36.857223   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:36.911864   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:36.911949   16524 retry.go:31] will retry after 229.636736ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:37.143902   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:37.202926   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:37.203023   16524 retry.go:31] will retry after 430.885647ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:37.634779   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:37.692060   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:37.692146   16524 retry.go:31] will retry after 438.898808ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:38.132161   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:38.189726   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	W0223 13:24:38.189831   16524 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	
	W0223 13:24:38.189847   16524 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:38.189904   16524 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:24:38.189951   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:38.244802   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:38.244896   16524 retry.go:31] will retry after 233.245182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:38.479046   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:38.537936   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:38.538023   16524 retry.go:31] will retry after 399.445217ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:38.939906   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:38.999976   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	I0223 13:24:39.000086   16524 retry.go:31] will retry after 694.219907ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:39.696932   16524 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000
	W0223 13:24:39.757958   16524 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000 returned with exit code 1
	W0223 13:24:39.758058   16524 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	
	W0223 13:24:39.758077   16524 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "custom-flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: custom-flannel-235000
	I0223 13:24:39.758082   16524 fix.go:57] fixHost completed within 28.799576893s
	I0223 13:24:39.758088   16524 start.go:83] releasing machines lock for "custom-flannel-235000", held for 28.799647688s
	W0223 13:24:39.758214   16524 out.go:239] * Failed to start docker container. Running "minikube delete -p custom-flannel-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p custom-flannel-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:24:39.800327   16524 out.go:177] 
	W0223 13:24:39.821535   16524 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for custom-flannel-235000 container: docker run --rm --name custom-flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-235000 --entrypoint /usr/bin/test -v custom-flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:24:39.821561   16524 out.go:239] * 
	* 
	W0223 13:24:39.822958   16524 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:24:39.885205   16524 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/custom-flannel/Start (40.76s)

TestNetworkPlugins/group/false/Start (35.5s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p false-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker : exit status 80 (35.492867s)

-- stdout --
	* [false-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node false-235000 in cluster false-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "false-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	
-- /stdout --
** stderr ** 
	I0223 13:24:48.672972   16945 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:24:48.673132   16945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:24:48.673137   16945 out.go:309] Setting ErrFile to fd 2...
	I0223 13:24:48.673141   16945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:24:48.673251   16945 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:24:48.674558   16945 out.go:303] Setting JSON to false
	I0223 13:24:48.692984   16945 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3263,"bootTime":1677184225,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:24:48.693105   16945 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:24:48.720186   16945 out.go:177] * [false-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:24:48.784531   16945 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:24:48.763813   16945 notify.go:220] Checking for updates...
	I0223 13:24:48.827185   16945 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:24:48.848785   16945 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:24:48.869742   16945 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:24:48.890557   16945 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:24:48.911754   16945 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:24:48.933474   16945 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:24:48.933654   16945 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:24:48.933796   16945 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:24:48.933852   16945 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:24:48.996097   16945 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:24:48.996234   16945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:24:49.138544   16945 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:24:49.046623449 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:24:49.160254   16945 out.go:177] * Using the docker driver based on user configuration
	I0223 13:24:49.181173   16945 start.go:296] selected driver: docker
	I0223 13:24:49.181207   16945 start.go:857] validating driver "docker" against <nil>
	I0223 13:24:49.181227   16945 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:24:49.185119   16945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:24:49.327297   16945 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:24:49.23572937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:24:49.327394   16945 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:24:49.327567   16945 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:24:49.349202   16945 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:24:49.370957   16945 cni.go:84] Creating CNI manager for "false"
	I0223 13:24:49.371000   16945 start_flags.go:319] config:
	{Name:false-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:false-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:24:49.414018   16945 out.go:177] * Starting control plane node false-235000 in cluster false-235000
	I0223 13:24:49.434799   16945 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:24:49.455817   16945 out.go:177] * Pulling base image ...
	I0223 13:24:49.497937   16945 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:24:49.497994   16945 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:24:49.498023   16945 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:24:49.498038   16945 cache.go:57] Caching tarball of preloaded images
	I0223 13:24:49.498280   16945 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:24:49.498300   16945 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:24:49.499286   16945 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/false-235000/config.json ...
	I0223 13:24:49.499425   16945 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/false-235000/config.json: {Name:mk11d660dbce0babba3fc543ba8400962c211def Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:24:49.558431   16945 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:24:49.558450   16945 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:24:49.558504   16945 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:24:49.558633   16945 start.go:364] acquiring machines lock for false-235000: {Name:mk1301a284557c3b3104db415c9b9b806bd10d30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:24:49.558776   16945 start.go:368] acquired machines lock for "false-235000" in 130.679µs
	I0223 13:24:49.558810   16945 start.go:93] Provisioning new machine with config: &{Name:false-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:false-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:24:49.558876   16945 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:24:49.580540   16945 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:24:49.580877   16945 start.go:159] libmachine.API.Create for "false-235000" (driver="docker")
	I0223 13:24:49.580934   16945 client.go:168] LocalClient.Create starting
	I0223 13:24:49.581147   16945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:24:49.581232   16945 main.go:141] libmachine: Decoding PEM data...
	I0223 13:24:49.581270   16945 main.go:141] libmachine: Parsing certificate...
	I0223 13:24:49.581394   16945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:24:49.581475   16945 main.go:141] libmachine: Decoding PEM data...
	I0223 13:24:49.581488   16945 main.go:141] libmachine: Parsing certificate...
	I0223 13:24:49.582848   16945 cli_runner.go:164] Run: docker network inspect false-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:24:49.638881   16945 cli_runner.go:211] docker network inspect false-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:24:49.638973   16945 network_create.go:281] running [docker network inspect false-235000] to gather additional debugging logs...
	I0223 13:24:49.638990   16945 cli_runner.go:164] Run: docker network inspect false-235000
	W0223 13:24:49.693043   16945 cli_runner.go:211] docker network inspect false-235000 returned with exit code 1
	I0223 13:24:49.693078   16945 network_create.go:284] error running [docker network inspect false-235000]: docker network inspect false-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-235000
	I0223 13:24:49.693105   16945 network_create.go:286] output of [docker network inspect false-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-235000
	
	** /stderr **
	I0223 13:24:49.693195   16945 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:24:49.751250   16945 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:49.751565   16945 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011eae80}
	I0223 13:24:49.751578   16945 network_create.go:123] attempt to create docker network false-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:24:49.751647   16945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000
	W0223 13:24:49.805573   16945 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000 returned with exit code 1
	W0223 13:24:49.805607   16945 network_create.go:148] failed to create docker network false-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:24:49.805630   16945 network_create.go:115] failed to create docker network false-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:24:49.806949   16945 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:24:49.807262   16945 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0011ebce0}
	I0223 13:24:49.807272   16945 network_create.go:123] attempt to create docker network false-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:24:49.807337   16945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000
	I0223 13:24:49.894064   16945 network_create.go:107] docker network false-235000 192.168.67.0/24 created
	I0223 13:24:49.894094   16945 kic.go:117] calculated static IP "192.168.67.2" for the "false-235000" container
	I0223 13:24:49.894209   16945 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:24:49.951634   16945 cli_runner.go:164] Run: docker volume create false-235000 --label name.minikube.sigs.k8s.io=false-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:24:50.007300   16945 oci.go:103] Successfully created a docker volume false-235000
	I0223 13:24:50.007427   16945 cli_runner.go:164] Run: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:24:50.228660   16945 cli_runner.go:211] docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:24:50.228704   16945 client.go:171] LocalClient.Create took 647.760192ms
	I0223 13:24:52.230195   16945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:24:52.230329   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:52.287241   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:24:52.287358   16945 retry.go:31] will retry after 344.880823ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:52.632737   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:52.689235   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:24:52.689314   16945 retry.go:31] will retry after 434.778834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:53.124478   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:53.182232   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:24:53.182314   16945 retry.go:31] will retry after 714.884768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:53.899146   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:53.960529   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	W0223 13:24:53.960625   16945 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	
	W0223 13:24:53.960655   16945 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:53.960715   16945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:24:53.960766   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:54.014967   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:24:54.015047   16945 retry.go:31] will retry after 328.440121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:54.345292   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:54.402436   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:24:54.402516   16945 retry.go:31] will retry after 438.464976ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:54.843475   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:54.900580   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:24:54.900663   16945 retry.go:31] will retry after 603.348803ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:55.506415   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:24:55.565640   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	W0223 13:24:55.565725   16945 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	
	W0223 13:24:55.565742   16945 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:24:55.565747   16945 start.go:128] duration metric: createHost completed in 6.006852609s
	I0223 13:24:55.565753   16945 start.go:83] releasing machines lock for "false-235000", held for 6.006955641s
	W0223 13:24:55.565768   16945 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:24:55.566220   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:24:55.620607   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:24:55.620659   16945 delete.go:82] Unable to get host status for false-235000, assuming it has already been deleted: state: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	W0223 13:24:55.620808   16945 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:24:55.620817   16945 start.go:706] Will try again in 5 seconds ...
	I0223 13:25:00.621002   16945 start.go:364] acquiring machines lock for false-235000: {Name:mk1301a284557c3b3104db415c9b9b806bd10d30 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:25:00.621104   16945 start.go:368] acquired machines lock for "false-235000" in 81.466µs
	I0223 13:25:00.621133   16945 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:25:00.621141   16945 fix.go:55] fixHost starting: 
	I0223 13:25:00.621371   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:00.677944   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:00.677986   16945 fix.go:103] recreateIfNeeded on false-235000: state= err=unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:00.678006   16945 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:25:00.699778   16945 out.go:177] * docker "false-235000" container is missing, will recreate.
	I0223 13:25:00.743494   16945 delete.go:124] DEMOLISHING false-235000 ...
	I0223 13:25:00.743702   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:00.798422   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:25:00.798465   16945 stop.go:75] unable to get state: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:00.798487   16945 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:00.798895   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:00.853366   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:00.853423   16945 delete.go:82] Unable to get host status for false-235000, assuming it has already been deleted: state: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:00.853522   16945 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-235000
	W0223 13:25:00.909650   16945 cli_runner.go:211] docker container inspect -f {{.Id}} false-235000 returned with exit code 1
	I0223 13:25:00.909680   16945 kic.go:367] could not find the container false-235000 to remove it. will try anyways
	I0223 13:25:00.909774   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:00.965355   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:25:00.965396   16945 oci.go:84] error getting container status, will try to delete anyways: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:00.965506   16945 cli_runner.go:164] Run: docker exec --privileged -t false-235000 /bin/bash -c "sudo init 0"
	W0223 13:25:01.020019   16945 cli_runner.go:211] docker exec --privileged -t false-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:25:01.020046   16945 oci.go:641] error shutdown false-235000: docker exec --privileged -t false-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:02.020488   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:02.078233   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:02.078278   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:02.078287   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:02.078307   16945 retry.go:31] will retry after 421.416376ms: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:02.502111   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:02.561839   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:02.561878   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:02.561888   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:02.561912   16945 retry.go:31] will retry after 873.169033ms: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:03.437467   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:03.493955   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:03.494000   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:03.494013   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:03.494032   16945 retry.go:31] will retry after 988.158212ms: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:04.482556   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:04.541829   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:04.541877   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:04.541886   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:04.541907   16945 retry.go:31] will retry after 2.494537381s: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:07.037504   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:07.093526   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:07.093583   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:07.093593   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:07.093612   16945 retry.go:31] will retry after 3.332329592s: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:10.428389   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:10.484923   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:10.484967   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:10.484975   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:10.484995   16945 retry.go:31] will retry after 3.362070553s: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:13.848418   16945 cli_runner.go:164] Run: docker container inspect false-235000 --format={{.State.Status}}
	W0223 13:25:13.907995   16945 cli_runner.go:211] docker container inspect false-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:13.908045   16945 oci.go:653] temporary error verifying shutdown: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:13.908054   16945 oci.go:655] temporary error: container false-235000 status is  but expect it to be exited
	I0223 13:25:13.908081   16945 oci.go:88] couldn't shut down false-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "false-235000": docker container inspect false-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	 
	I0223 13:25:13.908161   16945 cli_runner.go:164] Run: docker rm -f -v false-235000
	I0223 13:25:13.966938   16945 cli_runner.go:164] Run: docker container inspect -f {{.Id}} false-235000
	W0223 13:25:14.021261   16945 cli_runner.go:211] docker container inspect -f {{.Id}} false-235000 returned with exit code 1
	I0223 13:25:14.021372   16945 cli_runner.go:164] Run: docker network inspect false-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:25:14.077536   16945 cli_runner.go:164] Run: docker network rm false-235000
	W0223 13:25:14.182881   16945 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:25:14.182899   16945 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:25:15.182957   16945 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:25:15.226557   16945 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:25:15.226761   16945 start.go:159] libmachine.API.Create for "false-235000" (driver="docker")
	I0223 13:25:15.226789   16945 client.go:168] LocalClient.Create starting
	I0223 13:25:15.226973   16945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:25:15.227061   16945 main.go:141] libmachine: Decoding PEM data...
	I0223 13:25:15.227084   16945 main.go:141] libmachine: Parsing certificate...
	I0223 13:25:15.227164   16945 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:25:15.227226   16945 main.go:141] libmachine: Decoding PEM data...
	I0223 13:25:15.227250   16945 main.go:141] libmachine: Parsing certificate...
	I0223 13:25:15.227869   16945 cli_runner.go:164] Run: docker network inspect false-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:25:15.284976   16945 cli_runner.go:211] docker network inspect false-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:25:15.285074   16945 network_create.go:281] running [docker network inspect false-235000] to gather additional debugging logs...
	I0223 13:25:15.285091   16945 cli_runner.go:164] Run: docker network inspect false-235000
	W0223 13:25:15.341221   16945 cli_runner.go:211] docker network inspect false-235000 returned with exit code 1
	I0223 13:25:15.341248   16945 network_create.go:284] error running [docker network inspect false-235000]: docker network inspect false-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-235000
	I0223 13:25:15.341262   16945 network_create.go:286] output of [docker network inspect false-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-235000
	
	** /stderr **
	I0223 13:25:15.341353   16945 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:25:15.398214   16945 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:25:15.399710   16945 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:25:15.401247   16945 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:25:15.401562   16945 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010aa8c0}
	I0223 13:25:15.401573   16945 network_create.go:123] attempt to create docker network false-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:25:15.401635   16945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000
	W0223 13:25:15.456392   16945 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000 returned with exit code 1
	W0223 13:25:15.456421   16945 network_create.go:148] failed to create docker network false-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:25:15.456435   16945 network_create.go:115] failed to create docker network false-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:25:15.457841   16945 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:25:15.458164   16945 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000b45640}
	I0223 13:25:15.458174   16945 network_create.go:123] attempt to create docker network false-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:25:15.458248   16945 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=false-235000 false-235000
	I0223 13:25:15.546127   16945 network_create.go:107] docker network false-235000 192.168.85.0/24 created
	I0223 13:25:15.546160   16945 kic.go:117] calculated static IP "192.168.85.2" for the "false-235000" container
	I0223 13:25:15.546276   16945 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:25:15.604270   16945 cli_runner.go:164] Run: docker volume create false-235000 --label name.minikube.sigs.k8s.io=false-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:25:15.657978   16945 oci.go:103] Successfully created a docker volume false-235000
	I0223 13:25:15.658095   16945 cli_runner.go:164] Run: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:25:15.796429   16945 cli_runner.go:211] docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:25:15.796479   16945 client.go:171] LocalClient.Create took 569.682842ms
	I0223 13:25:17.797919   16945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:25:17.798030   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:17.857490   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:17.857573   16945 retry.go:31] will retry after 277.830459ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:18.137757   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:18.196289   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:18.196372   16945 retry.go:31] will retry after 372.34647ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:18.571153   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:18.629599   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:18.629690   16945 retry.go:31] will retry after 826.587933ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:19.458652   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:19.517910   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	W0223 13:25:19.518002   16945 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	
	W0223 13:25:19.518026   16945 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:19.518089   16945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:25:19.518138   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:19.573729   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:19.573821   16945 retry.go:31] will retry after 360.669603ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:19.936813   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:19.994732   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:19.994817   16945 retry.go:31] will retry after 212.222239ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:20.207257   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:20.266811   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:20.266903   16945 retry.go:31] will retry after 726.399986ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:20.993653   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:21.049628   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	W0223 13:25:21.049732   16945 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	
	W0223 13:25:21.049752   16945 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:21.049756   16945 start.go:128] duration metric: createHost completed in 5.866716897s
	I0223 13:25:21.049825   16945 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:25:21.049884   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:21.104752   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:21.104839   16945 retry.go:31] will retry after 302.669465ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:21.409937   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:21.469760   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:21.469840   16945 retry.go:31] will retry after 438.665777ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:21.908848   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:21.966358   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:21.966439   16945 retry.go:31] will retry after 746.445922ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:22.713520   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:22.770528   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	W0223 13:25:22.770620   16945 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	
	W0223 13:25:22.770639   16945 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:22.770696   16945 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:25:22.770743   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:22.824871   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:22.824963   16945 retry.go:31] will retry after 253.995875ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:23.079363   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:23.137875   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:23.137958   16945 retry.go:31] will retry after 356.540512ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:23.496295   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:23.556028   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	I0223 13:25:23.556114   16945 retry.go:31] will retry after 354.533414ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:23.913081   16945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000
	W0223 13:25:23.970660   16945 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000 returned with exit code 1
	W0223 13:25:23.970760   16945 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	
	W0223 13:25:23.970777   16945 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "false-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: false-235000
	I0223 13:25:23.970788   16945 fix.go:57] fixHost completed within 23.34959172s
	I0223 13:25:23.970794   16945 start.go:83] releasing machines lock for "false-235000", held for 23.349628567s
	W0223 13:25:23.970939   16945 out.go:239] * Failed to start docker container. Running "minikube delete -p false-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p false-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:25:24.013818   16945 out.go:177] 
	W0223 13:25:24.035937   16945 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for false-235000 container: docker run --rm --name false-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-235000 --entrypoint /usr/bin/test -v false-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:25:24.035971   16945 out.go:239] * 
	* 
	W0223 13:25:24.037324   16945 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:25:24.098610   16945 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (35.50s)
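A rough way to replay the step that fails above, outside the test harness: the volume-preparation probe is just a docker run against the kicbase image, so it can be re-issued by hand. This is a sketch only; the volume name and image digest are copied from the log above, and it assumes Docker Desktop is running on the same macOS agent.

    # replay minikube's volume-preparation probe by hand (sketch; assumes Docker Desktop is up)
    docker volume create false-235000
    docker run --rm --entrypoint /usr/bin/test \
      -v false-235000:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
      -d /var/lib
    echo $?                         # 0 if the volume mounts cleanly; 125 matches the failure captured above
    docker volume rm false-235000   # remove the probe volume afterwards

When Docker Desktop's containerd backend is unreachable (the /var/run/desktop-containerd/containerd.sock "connection refused" error in the stderr block), the docker run exits 125 before any container is created, which is why every subsequent `docker container inspect ... false-235000` retry in the log reports "No such container".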

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (36.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p enable-default-cni-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker : exit status 80 (36.119610414s)

                                                
                                                
-- stdout --
	* [enable-default-cni-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node enable-default-cni-235000 in cluster enable-default-cni-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "enable-default-cni-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:25:32.324919   17341 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:25:32.325067   17341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:25:32.325072   17341 out.go:309] Setting ErrFile to fd 2...
	I0223 13:25:32.325076   17341 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:25:32.325179   17341 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:25:32.326592   17341 out.go:303] Setting JSON to false
	I0223 13:25:32.344873   17341 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3307,"bootTime":1677184225,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:25:32.345014   17341 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:25:32.367188   17341 out.go:177] * [enable-default-cni-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:25:32.410350   17341 notify.go:220] Checking for updates...
	I0223 13:25:32.431690   17341 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:25:32.452904   17341 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:25:32.473819   17341 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:25:32.515879   17341 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:25:32.557852   17341 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:25:32.599804   17341 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:25:32.623690   17341 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:25:32.623850   17341 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:25:32.623947   17341 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:25:32.623997   17341 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:25:32.684952   17341 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:25:32.685102   17341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:25:32.828649   17341 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:25:32.735789217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:25:32.850621   17341 out.go:177] * Using the docker driver based on user configuration
	I0223 13:25:32.872220   17341 start.go:296] selected driver: docker
	I0223 13:25:32.872249   17341 start.go:857] validating driver "docker" against <nil>
	I0223 13:25:32.872270   17341 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:25:32.876167   17341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:25:33.018725   17341 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:25:32.926667686 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:25:33.018828   17341 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	E0223 13:25:33.018993   17341 start_flags.go:457] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0223 13:25:33.019014   17341 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:25:33.040587   17341 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:25:33.061863   17341 cni.go:84] Creating CNI manager for "bridge"
	I0223 13:25:33.061892   17341 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:25:33.061907   17341 start_flags.go:319] config:
	{Name:enable-default-cni-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:enable-default-cni-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:25:33.083708   17341 out.go:177] * Starting control plane node enable-default-cni-235000 in cluster enable-default-cni-235000
	I0223 13:25:33.105522   17341 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:25:33.126807   17341 out.go:177] * Pulling base image ...
	I0223 13:25:33.169567   17341 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:25:33.169572   17341 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:25:33.169622   17341 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:25:33.169631   17341 cache.go:57] Caching tarball of preloaded images
	I0223 13:25:33.169761   17341 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:25:33.169771   17341 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:25:33.170409   17341 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/enable-default-cni-235000/config.json ...
	I0223 13:25:33.170480   17341 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/enable-default-cni-235000/config.json: {Name:mke532a70945198fdffe73a805f609abf1281e75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:25:33.227890   17341 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:25:33.227907   17341 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:25:33.228011   17341 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:25:33.228061   17341 start.go:364] acquiring machines lock for enable-default-cni-235000: {Name:mk94bb08f171e3d96218a7fb0f63fb3cd3013aa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:25:33.228223   17341 start.go:368] acquired machines lock for "enable-default-cni-235000" in 149.448µs
	I0223 13:25:33.228262   17341 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:enable-default-cni-235000 Namespace:default AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:25:33.228346   17341 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:25:33.272040   17341 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:25:33.272495   17341 start.go:159] libmachine.API.Create for "enable-default-cni-235000" (driver="docker")
	I0223 13:25:33.272572   17341 client.go:168] LocalClient.Create starting
	I0223 13:25:33.272816   17341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:25:33.272908   17341 main.go:141] libmachine: Decoding PEM data...
	I0223 13:25:33.272946   17341 main.go:141] libmachine: Parsing certificate...
	I0223 13:25:33.273066   17341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:25:33.273130   17341 main.go:141] libmachine: Decoding PEM data...
	I0223 13:25:33.273146   17341 main.go:141] libmachine: Parsing certificate...
	I0223 13:25:33.273986   17341 cli_runner.go:164] Run: docker network inspect enable-default-cni-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:25:33.329126   17341 cli_runner.go:211] docker network inspect enable-default-cni-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:25:33.329218   17341 network_create.go:281] running [docker network inspect enable-default-cni-235000] to gather additional debugging logs...
	I0223 13:25:33.329236   17341 cli_runner.go:164] Run: docker network inspect enable-default-cni-235000
	W0223 13:25:33.382565   17341 cli_runner.go:211] docker network inspect enable-default-cni-235000 returned with exit code 1
	I0223 13:25:33.382587   17341 network_create.go:284] error running [docker network inspect enable-default-cni-235000]: docker network inspect enable-default-cni-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-235000
	I0223 13:25:33.382598   17341 network_create.go:286] output of [docker network inspect enable-default-cni-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-235000
	
	** /stderr **
	I0223 13:25:33.382689   17341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:25:33.439257   17341 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:25:33.439604   17341 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d76710}
	I0223 13:25:33.439618   17341 network_create.go:123] attempt to create docker network enable-default-cni-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:25:33.439682   17341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000
	W0223 13:25:33.494704   17341 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000 returned with exit code 1
	W0223 13:25:33.494743   17341 network_create.go:148] failed to create docker network enable-default-cni-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:25:33.494762   17341 network_create.go:115] failed to create docker network enable-default-cni-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:25:33.496130   17341 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:25:33.496446   17341 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00005be40}
	I0223 13:25:33.496456   17341 network_create.go:123] attempt to create docker network enable-default-cni-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:25:33.496531   17341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000
	I0223 13:25:33.583350   17341 network_create.go:107] docker network enable-default-cni-235000 192.168.67.0/24 created
	I0223 13:25:33.583394   17341 kic.go:117] calculated static IP "192.168.67.2" for the "enable-default-cni-235000" container
	I0223 13:25:33.583523   17341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:25:33.641575   17341 cli_runner.go:164] Run: docker volume create enable-default-cni-235000 --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:25:33.696091   17341 oci.go:103] Successfully created a docker volume enable-default-cni-235000
	I0223 13:25:33.696224   17341 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:25:33.908171   17341 cli_runner.go:211] docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:25:33.908234   17341 client.go:171] LocalClient.Create took 635.651402ms
	I0223 13:25:35.910132   17341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:25:35.910228   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:35.968782   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:35.968910   17341 retry.go:31] will retry after 210.495838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:36.180218   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:36.241063   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:36.241162   17341 retry.go:31] will retry after 399.838903ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:36.643418   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:36.701607   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:36.701692   17341 retry.go:31] will retry after 409.955044ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:37.112795   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:37.171880   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	W0223 13:25:37.172008   17341 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	
	W0223 13:25:37.172026   17341 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:37.172078   17341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:25:37.172134   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:37.226727   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:37.226813   17341 retry.go:31] will retry after 198.643536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:37.427859   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:37.485494   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:37.485584   17341 retry.go:31] will retry after 393.621807ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:37.881583   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:37.939308   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:37.939392   17341 retry.go:31] will retry after 452.485121ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:38.392802   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:38.449359   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:25:38.449452   17341 retry.go:31] will retry after 672.482535ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:39.123321   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:25:39.181458   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	W0223 13:25:39.181554   17341 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	
	W0223 13:25:39.181583   17341 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:39.181595   17341 start.go:128] duration metric: createHost completed in 5.953231022s
	I0223 13:25:39.181601   17341 start.go:83] releasing machines lock for "enable-default-cni-235000", held for 5.953356485s
	W0223 13:25:39.181616   17341 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:25:39.182024   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:39.236891   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:39.236938   17341 delete.go:82] Unable to get host status for enable-default-cni-235000, assuming it has already been deleted: state: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	W0223 13:25:39.237081   17341 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:25:39.237089   17341 start.go:706] Will try again in 5 seconds ...
	I0223 13:25:44.238129   17341 start.go:364] acquiring machines lock for enable-default-cni-235000: {Name:mk94bb08f171e3d96218a7fb0f63fb3cd3013aa9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:25:44.238288   17341 start.go:368] acquired machines lock for "enable-default-cni-235000" in 119.931µs
	I0223 13:25:44.238329   17341 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:25:44.238343   17341 fix.go:55] fixHost starting: 
	I0223 13:25:44.238828   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:44.297148   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:44.297197   17341 fix.go:103] recreateIfNeeded on enable-default-cni-235000: state= err=unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:44.297213   17341 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:25:44.318895   17341 out.go:177] * docker "enable-default-cni-235000" container is missing, will recreate.
	I0223 13:25:44.362893   17341 delete.go:124] DEMOLISHING enable-default-cni-235000 ...
	I0223 13:25:44.363144   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:44.419637   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:25:44.419680   17341 stop.go:75] unable to get state: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:44.419695   17341 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:44.420077   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:44.473903   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:44.473947   17341 delete.go:82] Unable to get host status for enable-default-cni-235000, assuming it has already been deleted: state: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:44.474028   17341 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-235000
	W0223 13:25:44.528811   17341 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-235000 returned with exit code 1
	I0223 13:25:44.528869   17341 kic.go:367] could not find the container enable-default-cni-235000 to remove it. will try anyways
	I0223 13:25:44.528950   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:44.582561   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:25:44.582623   17341 oci.go:84] error getting container status, will try to delete anyways: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:44.582720   17341 cli_runner.go:164] Run: docker exec --privileged -t enable-default-cni-235000 /bin/bash -c "sudo init 0"
	W0223 13:25:44.636186   17341 cli_runner.go:211] docker exec --privileged -t enable-default-cni-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:25:44.636245   17341 oci.go:641] error shutdown enable-default-cni-235000: docker exec --privileged -t enable-default-cni-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:45.638680   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:45.696110   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:45.696150   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:45.696161   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:45.696178   17341 retry.go:31] will retry after 686.171767ms: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:46.383229   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:46.444961   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:46.445011   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:46.445021   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:46.445041   17341 retry.go:31] will retry after 1.06812741s: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:47.513521   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:47.571046   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:47.571090   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:47.571098   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:47.571129   17341 retry.go:31] will retry after 1.100996519s: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:48.673583   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:48.733698   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:48.733742   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:48.733758   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:48.733780   17341 retry.go:31] will retry after 2.122941378s: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:50.859154   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:50.920133   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:50.920177   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:50.920186   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:50.920206   17341 retry.go:31] will retry after 1.963162592s: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:52.884796   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:52.943599   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:52.943642   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:52.943657   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:52.943677   17341 retry.go:31] will retry after 2.674548004s: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:55.620602   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:55.678227   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:55.678279   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:55.678289   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:55.678308   17341 retry.go:31] will retry after 3.191512416s: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:58.871639   17341 cli_runner.go:164] Run: docker container inspect enable-default-cni-235000 --format={{.State.Status}}
	W0223 13:25:58.932580   17341 cli_runner.go:211] docker container inspect enable-default-cni-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:25:58.932624   17341 oci.go:653] temporary error verifying shutdown: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:25:58.932632   17341 oci.go:655] temporary error: container enable-default-cni-235000 status is  but expect it to be exited
	I0223 13:25:58.932666   17341 oci.go:88] couldn't shut down enable-default-cni-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "enable-default-cni-235000": docker container inspect enable-default-cni-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	 
	I0223 13:25:58.932738   17341 cli_runner.go:164] Run: docker rm -f -v enable-default-cni-235000
	I0223 13:25:58.988756   17341 cli_runner.go:164] Run: docker container inspect -f {{.Id}} enable-default-cni-235000
	W0223 13:25:59.042480   17341 cli_runner.go:211] docker container inspect -f {{.Id}} enable-default-cni-235000 returned with exit code 1
	I0223 13:25:59.042601   17341 cli_runner.go:164] Run: docker network inspect enable-default-cni-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:25:59.098007   17341 cli_runner.go:164] Run: docker network rm enable-default-cni-235000
	W0223 13:25:59.203437   17341 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:25:59.203457   17341 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:26:00.204069   17341 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:26:00.226347   17341 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:26:00.226510   17341 start.go:159] libmachine.API.Create for "enable-default-cni-235000" (driver="docker")
	I0223 13:26:00.226555   17341 client.go:168] LocalClient.Create starting
	I0223 13:26:00.226759   17341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:26:00.226853   17341 main.go:141] libmachine: Decoding PEM data...
	I0223 13:26:00.226877   17341 main.go:141] libmachine: Parsing certificate...
	I0223 13:26:00.226977   17341 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:26:00.227042   17341 main.go:141] libmachine: Decoding PEM data...
	I0223 13:26:00.227063   17341 main.go:141] libmachine: Parsing certificate...
	I0223 13:26:00.227719   17341 cli_runner.go:164] Run: docker network inspect enable-default-cni-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:26:00.284957   17341 cli_runner.go:211] docker network inspect enable-default-cni-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:26:00.285055   17341 network_create.go:281] running [docker network inspect enable-default-cni-235000] to gather additional debugging logs...
	I0223 13:26:00.285072   17341 cli_runner.go:164] Run: docker network inspect enable-default-cni-235000
	W0223 13:26:00.341307   17341 cli_runner.go:211] docker network inspect enable-default-cni-235000 returned with exit code 1
	I0223 13:26:00.341330   17341 network_create.go:284] error running [docker network inspect enable-default-cni-235000]: docker network inspect enable-default-cni-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: enable-default-cni-235000
	I0223 13:26:00.341342   17341 network_create.go:286] output of [docker network inspect enable-default-cni-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: enable-default-cni-235000
	
	** /stderr **
	I0223 13:26:00.341434   17341 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:26:00.397425   17341 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:00.398904   17341 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:00.400382   17341 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:00.400693   17341 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010c0600}
	I0223 13:26:00.400705   17341 network_create.go:123] attempt to create docker network enable-default-cni-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:26:00.400771   17341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000
	W0223 13:26:00.454361   17341 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000 returned with exit code 1
	W0223 13:26:00.454393   17341 network_create.go:148] failed to create docker network enable-default-cni-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:26:00.454407   17341 network_create.go:115] failed to create docker network enable-default-cni-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:26:00.455725   17341 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:00.456033   17341 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010c1450}
	I0223 13:26:00.456043   17341 network_create.go:123] attempt to create docker network enable-default-cni-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:26:00.456116   17341 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=enable-default-cni-235000 enable-default-cni-235000
	I0223 13:26:00.542531   17341 network_create.go:107] docker network enable-default-cni-235000 192.168.85.0/24 created
	I0223 13:26:00.542563   17341 kic.go:117] calculated static IP "192.168.85.2" for the "enable-default-cni-235000" container
	I0223 13:26:00.542674   17341 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:26:00.600118   17341 cli_runner.go:164] Run: docker volume create enable-default-cni-235000 --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:26:00.654065   17341 oci.go:103] Successfully created a docker volume enable-default-cni-235000
	I0223 13:26:00.654179   17341 cli_runner.go:164] Run: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:26:00.791080   17341 cli_runner.go:211] docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:26:00.791119   17341 client.go:171] LocalClient.Create took 564.554964ms
	I0223 13:26:02.793558   17341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:26:02.793703   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:02.853779   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:02.853877   17341 retry.go:31] will retry after 225.647498ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:03.080335   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:03.137397   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:03.137486   17341 retry.go:31] will retry after 201.618298ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:03.340384   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:03.396644   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:03.396733   17341 retry.go:31] will retry after 618.28431ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:04.015697   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:04.075054   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	W0223 13:26:04.075153   17341 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	
	W0223 13:26:04.075165   17341 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:04.075225   17341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:26:04.075271   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:04.128625   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:04.128713   17341 retry.go:31] will retry after 228.757437ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:04.358801   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:04.418085   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:04.418179   17341 retry.go:31] will retry after 495.59046ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:04.914199   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:04.973311   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:04.973406   17341 retry.go:31] will retry after 331.479361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:05.305198   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:05.363028   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	W0223 13:26:05.363134   17341 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	
	W0223 13:26:05.363148   17341 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:05.363159   17341 start.go:128] duration metric: createHost completed in 5.15905891s
	I0223 13:26:05.363235   17341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:26:05.363287   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:05.418011   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:05.418106   17341 retry.go:31] will retry after 181.373598ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:05.600057   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:05.658132   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:05.658223   17341 retry.go:31] will retry after 378.408995ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:06.038462   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:06.096837   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:06.096934   17341 retry.go:31] will retry after 568.063097ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:06.666330   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:06.727228   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	W0223 13:26:06.727322   17341 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	
	W0223 13:26:06.727336   17341 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:06.727404   17341 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:26:06.727453   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:06.783766   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:06.783852   17341 retry.go:31] will retry after 282.178076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:07.067286   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:07.126500   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:07.126595   17341 retry.go:31] will retry after 366.326365ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:07.493402   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:07.551174   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	I0223 13:26:07.551264   17341 retry.go:31] will retry after 612.496359ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:08.166144   17341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000
	W0223 13:26:08.225777   17341 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000 returned with exit code 1
	W0223 13:26:08.225870   17341 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	
	W0223 13:26:08.225882   17341 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "enable-default-cni-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" enable-default-cni-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: enable-default-cni-235000
	I0223 13:26:08.225887   17341 fix.go:57] fixHost completed within 23.987488847s
	I0223 13:26:08.225894   17341 start.go:83] releasing machines lock for "enable-default-cni-235000", held for 23.987537509s
	W0223 13:26:08.226043   17341 out.go:239] * Failed to start docker container. Running "minikube delete -p enable-default-cni-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p enable-default-cni-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:26:08.269706   17341 out.go:177] 
	W0223 13:26:08.291760   17341 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for enable-default-cni-235000 container: docker run --rm --name enable-default-cni-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=enable-default-cni-235000 --entrypoint /usr/bin/test -v enable-default-cni-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:26:08.291794   17341 out.go:239] * 
	* 
	W0223 13:26:08.293013   17341 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:26:08.356751   17341 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/enable-default-cni/Start (36.13s)
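The root failure in the stderr block above is the Docker daemon reporting that its containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections: the preload sidecar (docker run --rm ... --entrypoint /usr/bin/test -v enable-default-cni-235000:/var ... -d /var/lib) exits with status 125 on both the initial and the recreate attempt, so the node container is never created, and the repeated "No such container" / "No such network" errors are downstream of that. The subnet fallback (192.168.76.0/24 overlapped, retried on 192.168.85.0/24) creates the network but has no container to attach. Below is a hypothetical triage sketch, assuming Docker Desktop on macOS as on this agent; the profile name is copied from the log and the commands are illustrative, not part of the test run:

    # Hypothetical sketch: checks one might run on the agent before retrying this test.
    docker info --format '{{.ServerVersion}}'                                # the daemon itself still answered "docker system info" in this run
    docker run --rm hello-world                                              # exercises container creation, the step that failed here with exit status 125
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true    # lists minikube-created networks left over from earlier attempts
    minikube delete -p enable-default-cni-235000                             # the cleanup minikube itself suggests in the log above

If container creation still hits the containerd connection error, restarting Docker Desktop on the agent is the more likely remedy; deleting the profile alone will not help until containerd is reachable again.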

                                                
                                    
TestNetworkPlugins/group/flannel/Start (43.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p flannel-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker : exit status 80 (43.217716968s)

                                                
                                                
-- stdout --
	* [flannel-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node flannel-235000 in cluster flannel-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "flannel-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:26:16.684241   17750 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:26:16.684413   17750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:26:16.684418   17750 out.go:309] Setting ErrFile to fd 2...
	I0223 13:26:16.684422   17750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:26:16.684526   17750 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:26:16.685820   17750 out.go:303] Setting JSON to false
	I0223 13:26:16.704028   17750 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3351,"bootTime":1677184225,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:26:16.704096   17750 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:26:16.725789   17750 out.go:177] * [flannel-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:26:16.767952   17750 notify.go:220] Checking for updates...
	I0223 13:26:16.788783   17750 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:26:16.846727   17750 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:26:16.891139   17750 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:26:16.939047   17750 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:26:17.012201   17750 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:26:17.055657   17750 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:26:17.100205   17750 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:26:17.100478   17750 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:26:17.100688   17750 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:26:17.100758   17750 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:26:17.162944   17750 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:26:17.163088   17750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:26:17.309174   17750 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:52 SystemTime:2023-02-23 21:26:17.214907794 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:26:17.351597   17750 out.go:177] * Using the docker driver based on user configuration
	I0223 13:26:17.372584   17750 start.go:296] selected driver: docker
	I0223 13:26:17.372599   17750 start.go:857] validating driver "docker" against <nil>
	I0223 13:26:17.372631   17750 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:26:17.375536   17750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:26:17.524733   17750 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:64 OomKillDisable:false NGoroutines:54 SystemTime:2023-02-23 21:26:17.431639057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:26:17.524850   17750 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:26:17.525025   17750 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:26:17.548051   17750 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:26:17.568890   17750 cni.go:84] Creating CNI manager for "flannel"
	I0223 13:26:17.568909   17750 start_flags.go:314] Found "Flannel" CNI - setting NetworkPlugin=cni
	I0223 13:26:17.568921   17750 start_flags.go:319] config:
	{Name:flannel-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:flannel-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
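The block above is the generated cluster config that gets persisted to the profile's config.json a few lines below. As a rough illustration only (a hand-picked subset of the fields shown, not the real minikube ClusterConfig type), serializing a comparable structure looks like this:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed-down, hypothetical mirror of a few fields from the config dump above.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	NetworkPlugin     string
	CNI               string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int // MB
	CPUs             int
	KubernetesConfig KubernetesConfig
}

func main() {
	cc := ClusterConfig{
		Name:   "flannel-235000",
		Driver: "docker",
		Memory: 3072,
		CPUs:   2,
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.26.1",
			ClusterName:       "flannel-235000",
			ContainerRuntime:  "docker",
			NetworkPlugin:     "cni",
			CNI:               "flannel",
		},
	}
	out, _ := json.MarshalIndent(cc, "", "  ")
	fmt.Println(string(out)) // roughly the shape written to profiles/flannel-235000/config.json
}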
	I0223 13:26:17.611868   17750 out.go:177] * Starting control plane node flannel-235000 in cluster flannel-235000
	I0223 13:26:17.633047   17750 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:26:17.654633   17750 out.go:177] * Pulling base image ...
	I0223 13:26:17.696759   17750 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:26:17.696829   17750 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:26:17.696922   17750 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:26:17.696944   17750 cache.go:57] Caching tarball of preloaded images
	I0223 13:26:17.697698   17750 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:26:17.697887   17750 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:26:17.698380   17750 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/flannel-235000/config.json ...
	I0223 13:26:17.698459   17750 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/flannel-235000/config.json: {Name:mk54bd83b74cdd022d65646651fcac96e7d4cf1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:26:17.755542   17750 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:26:17.755557   17750 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:26:17.755578   17750 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:26:17.755623   17750 start.go:364] acquiring machines lock for flannel-235000: {Name:mkfcafdf92a7a5de9f4ef918a86020ba0ce1850b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:26:17.755786   17750 start.go:368] acquired machines lock for "flannel-235000" in 143.419µs
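The machines lock above is taken with a 500ms retry delay and a 10m timeout. A hypothetical sketch of those Delay/Timeout semantics using a simple lock file (minikube's real implementation differs; this only illustrates the polling behaviour the log reports):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// acquireLock polls for an exclusive lock file until the timeout expires,
// retrying at the configured delay. Purely illustrative.
func acquireLock(name string, delay, timeout time.Duration) (release func(), err error) {
	path := filepath.Join(os.TempDir(), name+".lock")
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out after %s waiting for lock %q", timeout, name)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	release, err := acquireLock("machines-flannel-235000", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println("lock error:", err)
		return
	}
	defer release()
	fmt.Printf("acquired machines lock in %s\n", time.Since(start))
}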
	I0223 13:26:17.755821   17750 start.go:93] Provisioning new machine with config: &{Name:flannel-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:flannel-235000 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:flannel NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:26:17.755881   17750 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:26:17.777735   17750 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:26:17.778131   17750 start.go:159] libmachine.API.Create for "flannel-235000" (driver="docker")
	I0223 13:26:17.778180   17750 client.go:168] LocalClient.Create starting
	I0223 13:26:17.778447   17750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:26:17.778527   17750 main.go:141] libmachine: Decoding PEM data...
	I0223 13:26:17.778561   17750 main.go:141] libmachine: Parsing certificate...
	I0223 13:26:17.778671   17750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:26:17.778726   17750 main.go:141] libmachine: Decoding PEM data...
	I0223 13:26:17.778743   17750 main.go:141] libmachine: Parsing certificate...
	I0223 13:26:17.779675   17750 cli_runner.go:164] Run: docker network inspect flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:26:17.836027   17750 cli_runner.go:211] docker network inspect flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:26:17.836142   17750 network_create.go:281] running [docker network inspect flannel-235000] to gather additional debugging logs...
	I0223 13:26:17.836158   17750 cli_runner.go:164] Run: docker network inspect flannel-235000
	W0223 13:26:17.891073   17750 cli_runner.go:211] docker network inspect flannel-235000 returned with exit code 1
	I0223 13:26:17.891100   17750 network_create.go:284] error running [docker network inspect flannel-235000]: docker network inspect flannel-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: flannel-235000
	I0223 13:26:17.891112   17750 network_create.go:286] output of [docker network inspect flannel-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: flannel-235000
	
	** /stderr **
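The two commands above are the usual probe: a formatted `docker network inspect`, and a plain one re-run to capture debugging output when the first fails. A minimal sketch of that pattern, treating "No such network" in stderr as "the network simply does not exist yet":

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// inspectNetwork runs `docker network inspect <name>` and distinguishes a
// missing network from a genuine failure by looking at stderr.
func inspectNetwork(name string) (exists bool, raw string, err error) {
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("docker", "network", "inspect", name)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	runErr := cmd.Run()
	if runErr == nil {
		return true, stdout.String(), nil
	}
	if strings.Contains(stderr.String(), "No such network") {
		return false, "", nil
	}
	return false, "", fmt.Errorf("docker network inspect %s: %v\nstderr: %s", name, runErr, stderr.String())
}

func main() {
	exists, _, err := inspectNetwork("flannel-235000")
	fmt.Println("exists:", exists, "err:", err)
}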
	I0223 13:26:17.891225   17750 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:26:17.952361   17750 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:17.952957   17750 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000174e00}
	I0223 13:26:17.952978   17750 network_create.go:123] attempt to create docker network flannel-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:26:17.953089   17750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000
	W0223 13:26:18.039752   17750 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000 returned with exit code 1
	W0223 13:26:18.039780   17750 network_create.go:148] failed to create docker network flannel-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:26:18.039797   17750 network_create.go:115] failed to create docker network flannel-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:26:18.041133   17750 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:18.041534   17750 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000362370}
	I0223 13:26:18.041550   17750 network_create.go:123] attempt to create docker network flannel-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:26:18.041630   17750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000
	I0223 13:26:18.131434   17750 network_create.go:107] docker network flannel-235000 192.168.67.0/24 created
	I0223 13:26:18.131463   17750 kic.go:117] calculated static IP "192.168.67.2" for the "flannel-235000" container
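The subnet search above walks candidate private /24 ranges (192.168.49.0/24 was reserved, 192.168.58.0/24 hit "Pool overlaps with other one on this address space", 192.168.67.0/24 succeeded) and then derives the node's static IP as gateway + 1. A hedged sketch of that loop; the starting octet and the step of 9 are inferred from the attempts in this log, not taken from source:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork tries successive 192.168.x.0/24 subnets until docker accepts
// one, skipping to the next candidate whenever the pool overlaps.
func createNetwork(name string) (subnet, nodeIP string, err error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet = fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		var stderr bytes.Buffer
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name)
		cmd.Stderr = &stderr
		if cmd.Run() == nil {
			// First usable client address is gateway + 1, e.g. 192.168.67.2.
			return subnet, fmt.Sprintf("192.168.%d.2", octet), nil
		}
		if strings.Contains(stderr.String(), "Pool overlaps") {
			continue // subnet is taken, try the next candidate
		}
		return "", "", fmt.Errorf("docker network create %s: %s", subnet, stderr.String())
	}
	return "", "", fmt.Errorf("no free /24 subnet found for %s", name)
}

func main() {
	subnet, ip, err := createNetwork("flannel-235000")
	fmt.Println(subnet, ip, err)
}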
	I0223 13:26:18.131591   17750 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:26:18.190700   17750 cli_runner.go:164] Run: docker volume create flannel-235000 --label name.minikube.sigs.k8s.io=flannel-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:26:18.246511   17750 oci.go:103] Successfully created a docker volume flannel-235000
	I0223 13:26:18.246632   17750 cli_runner.go:164] Run: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:26:18.467941   17750 cli_runner.go:211] docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:26:18.467985   17750 client.go:171] LocalClient.Create took 689.79625ms
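The failed `docker run` above is the preload-sidecar probe: mount the profile volume at /var and run `/usr/bin/test -d /var/lib` to see whether the volume is already populated. Exit 0 means the directory exists, exit 1 means it does not, and exit 125 (what happened here) means docker could not start the container at all. A sketch of interpreting those exit codes (image tag copied from the log, digest omitted for brevity):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// probeVolume runs the throwaway test container and maps its exit code onto
// the three outcomes described above.
func probeVolume(volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var",
		image, "-d", "/var/lib")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("/var/lib already present in volume", volume)
		return nil
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Println("/var/lib missing, volume needs to be populated")
		return nil
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 125:
		return fmt.Errorf("docker could not start the sidecar (daemon problem): %w", err)
	default:
		return err
	}
}

func main() {
	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768"
	fmt.Println(probeVolume("flannel-235000", img))
}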
	I0223 13:26:20.469357   17750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:26:20.469493   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:20.529149   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:20.529271   17750 retry.go:31] will retry after 320.281858ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
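The repeated inspect calls around here all try to read the host port docker mapped to the node's 22/tcp so an SSH session can be opened; since the container was never created, each attempt fails and is retried after a few hundred milliseconds. A minimal sketch of that lookup and retry (fixed attempt count here; the real delays are randomized per retry):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks docker for the host port bound to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	var stdout, stderr bytes.Buffer
	cmd := exec.Command("docker", "container", "inspect", "-f", format, container)
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return "", fmt.Errorf("inspect %s: %v: %s", container, err, strings.TrimSpace(stderr.String()))
	}
	return strings.TrimSpace(stdout.String()), nil
}

func main() {
	for attempt := 1; attempt <= 4; attempt++ {
		port, err := sshHostPort("flannel-235000")
		if err == nil {
			fmt.Println("ssh port:", port)
			return
		}
		fmt.Printf("attempt %d: %v\n", attempt, err)
		time.Sleep(400 * time.Millisecond)
	}
}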
	I0223 13:26:20.851864   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:20.912669   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:20.912748   17750 retry.go:31] will retry after 366.270381ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:21.280409   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:21.341070   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:21.341155   17750 retry.go:31] will retry after 566.55575ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:21.909400   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:21.965965   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	W0223 13:26:21.966057   17750 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	
	W0223 13:26:21.966082   17750 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:21.966143   17750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:26:21.966197   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:22.083785   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:22.083868   17750 retry.go:31] will retry after 219.587112ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:22.305076   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:22.379112   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:22.379195   17750 retry.go:31] will retry after 450.047747ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:22.831353   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:22.888406   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:22.888488   17750 retry.go:31] will retry after 435.305999ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:23.324069   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:23.378426   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:23.378509   17750 retry.go:31] will retry after 450.019871ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:23.829157   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:23.882818   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	W0223 13:26:23.882922   17750 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	
	W0223 13:26:23.882938   17750 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
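The two probes above ask the node for disk pressure on /var: `df -h` piped through awk for the used percentage, and `df -BG` for the free gibibytes. They normally run over SSH inside the Linux node; a local sketch of the same parsing (assumes a GNU/Linux df, since macOS df has no -B flag):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// diskStats mirrors the two shell probes from the log, run locally instead of
// through the node's ssh_runner.
func diskStats(path string) (usedPercent string, freeGiB int, err error) {
	out, err := exec.Command("sh", "-c",
		fmt.Sprintf("df -h %s | awk 'NR==2{print $5}'", path)).Output()
	if err != nil {
		return "", 0, err
	}
	usedPercent = strings.TrimSpace(string(out))

	out, err = exec.Command("sh", "-c",
		fmt.Sprintf("df -BG %s | awk 'NR==2{print $4}'", path)).Output()
	if err != nil {
		return "", 0, err
	}
	freeGiB, err = strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(out)), "G"))
	return usedPercent, freeGiB, err
}

func main() {
	used, free, err := diskStats("/var")
	fmt.Println(used, free, err)
}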
	I0223 13:26:23.882943   17750 start.go:128] duration metric: createHost completed in 6.127043594s
	I0223 13:26:23.882950   17750 start.go:83] releasing machines lock for "flannel-235000", held for 6.127141938s
	W0223 13:26:23.882965   17750 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:26:23.883419   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:23.937376   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:23.937431   17750 delete.go:82] Unable to get host status for flannel-235000, assuming it has already been deleted: state: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	W0223 13:26:23.937572   17750 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:26:23.937580   17750 start.go:706] Will try again in 5 seconds ...
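At this point the first StartHost attempt has failed, and the code sleeps five seconds before a second pass that will find the container missing and recreate the host from scratch. A hypothetical skeleton of that control flow (startHost and recreateHost stand in for the real functions; they are not minikube API names):

package main

import (
	"errors"
	"fmt"
	"time"
)

// startWithRetry tries once, waits five seconds on failure, then lets the
// second pass tear down and rebuild the host, as the log shows below.
func startWithRetry(startHost func() error, recreateHost func() error) error {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		return recreateHost()
	}
	return nil
}

func main() {
	err := startWithRetry(
		func() error { return errors.New("preparing volume: exit status 125") },
		func() error { return errors.New("still failing: containerd socket unavailable") },
	)
	fmt.Println(err)
}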
	I0223 13:26:28.938405   17750 start.go:364] acquiring machines lock for flannel-235000: {Name:mkfcafdf92a7a5de9f4ef918a86020ba0ce1850b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:26:28.938519   17750 start.go:368] acquired machines lock for "flannel-235000" in 92.317µs
	I0223 13:26:28.938543   17750 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:26:28.938558   17750 fix.go:55] fixHost starting: 
	I0223 13:26:28.938805   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:28.993166   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:28.993209   17750 fix.go:103] recreateIfNeeded on flannel-235000: state= err=unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:28.993226   17750 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:26:29.023425   17750 out.go:177] * docker "flannel-235000" container is missing, will recreate.
	I0223 13:26:29.065493   17750 delete.go:124] DEMOLISHING flannel-235000 ...
	I0223 13:26:29.065601   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:29.119189   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:26:29.119238   17750 stop.go:75] unable to get state: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:29.119252   17750 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:29.119625   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:29.173396   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:29.173443   17750 delete.go:82] Unable to get host status for flannel-235000, assuming it has already been deleted: state: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:29.173533   17750 cli_runner.go:164] Run: docker container inspect -f {{.Id}} flannel-235000
	W0223 13:26:29.228528   17750 cli_runner.go:211] docker container inspect -f {{.Id}} flannel-235000 returned with exit code 1
	I0223 13:26:29.228588   17750 kic.go:367] could not find the container flannel-235000 to remove it. will try anyways
	I0223 13:26:29.228669   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:29.282738   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:26:29.282780   17750 oci.go:84] error getting container status, will try to delete anyways: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:29.282873   17750 cli_runner.go:164] Run: docker exec --privileged -t flannel-235000 /bin/bash -c "sudo init 0"
	W0223 13:26:29.337227   17750 cli_runner.go:211] docker exec --privileged -t flannel-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:26:29.337264   17750 oci.go:641] error shutdown flannel-235000: docker exec --privileged -t flannel-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: flannel-235000
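The `sudo init 0` attempt above and the status polling that follows fit a shutdown-and-verify pattern: ask the node to power off, then poll `docker container inspect --format={{.State.Status}}` until it reports "exited", backing off between polls. In this run the container never existed, so every poll fails and the loop eventually gives up, which the caller treats as "might be okay". A rough sketch:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// shutdownAndWait powers the node off (best effort) and then polls its docker
// state, doubling the delay between polls roughly like the 0.5s..7s waits in
// this log.
func shutdownAndWait(container string, attempts int) error {
	_ = exec.Command("docker", "exec", "--privileged", "-t", container,
		"/bin/bash", "-c", "sudo init 0").Run()

	delay := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		var out bytes.Buffer
		cmd := exec.Command("docker", "container", "inspect", container,
			"--format", "{{.State.Status}}")
		cmd.Stdout = &out
		if err := cmd.Run(); err == nil && strings.TrimSpace(out.String()) == "exited" {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %s is exited (might be okay)", container)
}

func main() {
	fmt.Println(shutdownAndWait("flannel-235000", 5))
}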
	I0223 13:26:30.337478   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:30.392468   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:30.392512   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:30.392525   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:30.392545   17750 retry.go:31] will retry after 489.895753ms: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:30.883144   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:30.938542   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:30.938586   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:30.938597   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:30.938617   17750 retry.go:31] will retry after 1.122539636s: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:32.061527   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:32.120655   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:32.120698   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:32.120707   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:32.120727   17750 retry.go:31] will retry after 1.667761219s: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:33.789711   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:33.844200   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:33.844252   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:33.844261   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:33.844281   17750 retry.go:31] will retry after 1.272271585s: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:35.118941   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:35.176677   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:35.176719   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:35.176728   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:35.176748   17750 retry.go:31] will retry after 3.471769221s: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:38.649240   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:38.707260   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:38.707303   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:38.707311   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:38.707330   17750 retry.go:31] will retry after 3.449328743s: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:42.156848   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:42.211183   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:42.211227   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:42.211236   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:42.211255   17750 retry.go:31] will retry after 7.141282137s: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:49.354959   17750 cli_runner.go:164] Run: docker container inspect flannel-235000 --format={{.State.Status}}
	W0223 13:26:49.414763   17750 cli_runner.go:211] docker container inspect flannel-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:26:49.414807   17750 oci.go:653] temporary error verifying shutdown: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:49.414815   17750 oci.go:655] temporary error: container flannel-235000 status is  but expect it to be exited
	I0223 13:26:49.414841   17750 oci.go:88] couldn't shut down flannel-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "flannel-235000": docker container inspect flannel-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	 
	I0223 13:26:49.414931   17750 cli_runner.go:164] Run: docker rm -f -v flannel-235000
	I0223 13:26:49.471872   17750 cli_runner.go:164] Run: docker container inspect -f {{.Id}} flannel-235000
	W0223 13:26:49.526271   17750 cli_runner.go:211] docker container inspect -f {{.Id}} flannel-235000 returned with exit code 1
	I0223 13:26:49.526389   17750 cli_runner.go:164] Run: docker network inspect flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:26:49.581803   17750 cli_runner.go:164] Run: docker network rm flannel-235000
	W0223 13:26:49.696992   17750 delete.go:139] delete failed (probably ok) <nil>
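Once shutdown can't be verified, cleanup falls back to force-removal, as seen just above: `docker rm -f -v` for the container and its volumes, then `docker network rm` for the per-profile network, tolerating failures because the resources may never have existed. A minimal sketch:

package main

import (
	"fmt"
	"os/exec"
)

// demolish force-removes the container and its network, treating errors as
// "probably ok", mirroring the delete path in this log.
func demolish(name string) {
	if err := exec.Command("docker", "rm", "-f", "-v", name).Run(); err != nil {
		fmt.Println("container removal failed (probably ok):", err)
	}
	if err := exec.Command("docker", "network", "rm", name).Run(); err != nil {
		fmt.Println("network removal failed (probably ok):", err)
	}
}

func main() {
	demolish("flannel-235000")
}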
	I0223 13:26:49.697012   17750 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:26:50.699226   17750 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:26:50.722453   17750 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:26:50.722693   17750 start.go:159] libmachine.API.Create for "flannel-235000" (driver="docker")
	I0223 13:26:50.722736   17750 client.go:168] LocalClient.Create starting
	I0223 13:26:50.722953   17750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:26:50.723046   17750 main.go:141] libmachine: Decoding PEM data...
	I0223 13:26:50.723071   17750 main.go:141] libmachine: Parsing certificate...
	I0223 13:26:50.723163   17750 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:26:50.723226   17750 main.go:141] libmachine: Decoding PEM data...
	I0223 13:26:50.723254   17750 main.go:141] libmachine: Parsing certificate...
	I0223 13:26:50.744547   17750 cli_runner.go:164] Run: docker network inspect flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:26:50.802340   17750 cli_runner.go:211] docker network inspect flannel-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:26:50.802453   17750 network_create.go:281] running [docker network inspect flannel-235000] to gather additional debugging logs...
	I0223 13:26:50.802471   17750 cli_runner.go:164] Run: docker network inspect flannel-235000
	W0223 13:26:50.857050   17750 cli_runner.go:211] docker network inspect flannel-235000 returned with exit code 1
	I0223 13:26:50.857077   17750 network_create.go:284] error running [docker network inspect flannel-235000]: docker network inspect flannel-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: flannel-235000
	I0223 13:26:50.857090   17750 network_create.go:286] output of [docker network inspect flannel-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: flannel-235000
	
	** /stderr **
	I0223 13:26:50.857180   17750 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:26:50.913594   17750 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:50.915085   17750 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:50.916350   17750 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:50.916657   17750 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010232b0}
	I0223 13:26:50.916669   17750 network_create.go:123] attempt to create docker network flannel-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:26:50.916734   17750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000
	W0223 13:26:50.972056   17750 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000 returned with exit code 1
	W0223 13:26:50.972098   17750 network_create.go:148] failed to create docker network flannel-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:26:50.972111   17750 network_create.go:115] failed to create docker network flannel-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:26:50.973427   17750 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:26:50.973720   17750 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001136110}
	I0223 13:26:50.973730   17750 network_create.go:123] attempt to create docker network flannel-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:26:50.973791   17750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=flannel-235000 flannel-235000
	I0223 13:26:51.060583   17750 network_create.go:107] docker network flannel-235000 192.168.85.0/24 created
	I0223 13:26:51.060611   17750 kic.go:117] calculated static IP "192.168.85.2" for the "flannel-235000" container
	I0223 13:26:51.060732   17750 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:26:51.117545   17750 cli_runner.go:164] Run: docker volume create flannel-235000 --label name.minikube.sigs.k8s.io=flannel-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:26:51.171204   17750 oci.go:103] Successfully created a docker volume flannel-235000
	I0223 13:26:51.171321   17750 cli_runner.go:164] Run: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:26:51.305738   17750 cli_runner.go:211] docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:26:51.305778   17750 client.go:171] LocalClient.Create took 583.033244ms
	I0223 13:26:53.306040   17750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:26:53.306137   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:53.363559   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:53.363649   17750 retry.go:31] will retry after 180.435966ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:53.546402   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:53.605333   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:53.605417   17750 retry.go:31] will retry after 375.528509ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:53.983303   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:54.044504   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:54.044594   17750 retry.go:31] will retry after 435.116576ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:54.480232   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:54.535770   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:54.535854   17750 retry.go:31] will retry after 493.922538ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:55.030778   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:55.088586   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	W0223 13:26:55.088679   17750 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	
	W0223 13:26:55.088696   17750 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:55.088761   17750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:26:55.088810   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:55.144079   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:55.144162   17750 retry.go:31] will retry after 326.255516ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:55.470777   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:55.531281   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:55.531368   17750 retry.go:31] will retry after 312.130904ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:55.845805   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:55.904848   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:55.904940   17750 retry.go:31] will retry after 734.40799ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:56.641671   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:56.717018   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	W0223 13:26:56.717112   17750 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	
	W0223 13:26:56.717137   17750 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:56.717149   17750 start.go:128] duration metric: createHost completed in 6.017882539s
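The "duration metric" entries above are plain wall-clock timings around a phase. Something along these lines (hypothetical helper, not the real start.go code):

package main

import (
	"fmt"
	"time"
)

// timed wraps a phase and reports how long it took.
func timed(name string, fn func()) {
	start := time.Now()
	fn()
	fmt.Printf("duration metric: %s completed in %s\n", name, time.Since(start))
}

func main() {
	timed("createHost", func() { time.Sleep(10 * time.Millisecond) })
}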
	I0223 13:26:56.717222   17750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:26:56.717282   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:56.773187   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:56.773270   17750 retry.go:31] will retry after 316.641543ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:57.091410   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:57.150273   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:57.150362   17750 retry.go:31] will retry after 285.553176ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:57.436866   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:57.492717   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:57.492803   17750 retry.go:31] will retry after 504.239478ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:57.998946   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:58.058010   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	W0223 13:26:58.058100   17750 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	
	W0223 13:26:58.058119   17750 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:58.058178   17750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:26:58.058226   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:58.113293   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:58.113375   17750 retry.go:31] will retry after 184.594533ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:58.298913   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:58.358020   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:58.358101   17750 retry.go:31] will retry after 487.410536ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:58.847849   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:58.907802   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	I0223 13:26:58.907884   17750 retry.go:31] will retry after 716.390876ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:59.625635   17750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000
	W0223 13:26:59.681842   17750 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000 returned with exit code 1
	W0223 13:26:59.681928   17750 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	
	W0223 13:26:59.681945   17750 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "flannel-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" flannel-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: flannel-235000
	I0223 13:26:59.681954   17750 fix.go:57] fixHost completed within 30.743330302s
	I0223 13:26:59.681962   17750 start.go:83] releasing machines lock for "flannel-235000", held for 30.743364183s
	W0223 13:26:59.682132   17750 out.go:239] * Failed to start docker container. Running "minikube delete -p flannel-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p flannel-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:26:59.725777   17750 out.go:177] 
	W0223 13:26:59.747853   17750 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for flannel-235000 container: docker run --rm --name flannel-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=flannel-235000 --entrypoint /usr/bin/test -v flannel-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:26:59.747887   17750 out.go:239] * 
	* 
	W0223 13:26:59.748970   17750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:26:59.833746   17750 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/flannel/Start (43.23s)
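The flannel failure above reduces to a single root cause: the kicbase preload sidecar container never starts because the Docker Desktop daemon cannot reach its containerd socket ("dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused", exit status 125), and every later "No such container: flannel-235000" retry is downstream of that. Below is a minimal sketch of the same checks run by hand, using only commands that already appear verbatim in the output above (profile name, labels, image digest, and the suggested cleanup are copied from the log; whether this reproduces the condition depends on Docker Desktop still being in the same broken state):

    # The step that actually fails (exit status 125): the preload sidecar that
    # prepares the flannel-235000 volume cannot be started by the daemon.
    docker run --rm --name flannel-235000-preload-sidecar \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=flannel-235000 \
      --entrypoint /usr/bin/test \
      -v flannel-235000:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
      -d /var/lib

    # What minikube keeps retrying afterwards: resolving the host port mapped to
    # 22/tcp on a container that was never created, hence "No such container".
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      flannel-235000

    # The remediation suggested by the output itself before re-running the test.
    minikube delete -p flannel-235000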

TestNetworkPlugins/group/bridge/Start (40.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p bridge-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker : exit status 80 (40.300496093s)

-- stdout --
	* [bridge-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node bridge-235000 in cluster bridge-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "bridge-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

-- /stdout --
** stderr ** 
	I0223 13:27:08.626582   18241 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:27:08.626744   18241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:27:08.626749   18241 out.go:309] Setting ErrFile to fd 2...
	I0223 13:27:08.626753   18241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:27:08.626853   18241 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:27:08.628293   18241 out.go:303] Setting JSON to false
	I0223 13:27:08.647064   18241 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3403,"bootTime":1677184225,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:27:08.647142   18241 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:27:08.668605   18241 out.go:177] * [bridge-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:27:08.690592   18241 notify.go:220] Checking for updates...
	I0223 13:27:08.711472   18241 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:27:08.755414   18241 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:27:08.776615   18241 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:27:08.797556   18241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:27:08.839495   18241 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:27:08.897394   18241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:27:08.919314   18241 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:27:08.919459   18241 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:27:08.919606   18241 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:27:08.919677   18241 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:27:08.983458   18241 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:27:08.983575   18241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:27:09.128133   18241 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:27:09.034210957 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:27:09.150438   18241 out.go:177] * Using the docker driver based on user configuration
	I0223 13:27:09.193034   18241 start.go:296] selected driver: docker
	I0223 13:27:09.193091   18241 start.go:857] validating driver "docker" against <nil>
	I0223 13:27:09.193110   18241 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:27:09.197038   18241 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:27:09.343443   18241 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:27:09.249572905 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:27:09.343553   18241 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:27:09.343724   18241 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:27:09.367481   18241 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:27:09.388625   18241 cni.go:84] Creating CNI manager for "bridge"
	I0223 13:27:09.388646   18241 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:27:09.388678   18241 start_flags.go:319] config:
	{Name:bridge-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:bridge-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:27:09.410597   18241 out.go:177] * Starting control plane node bridge-235000 in cluster bridge-235000
	I0223 13:27:09.431601   18241 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:27:09.452479   18241 out.go:177] * Pulling base image ...
	I0223 13:27:09.494708   18241 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:27:09.494794   18241 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:27:09.494796   18241 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:27:09.494812   18241 cache.go:57] Caching tarball of preloaded images
	I0223 13:27:09.495021   18241 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:27:09.495042   18241 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:27:09.496065   18241 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/bridge-235000/config.json ...
	I0223 13:27:09.496270   18241 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/bridge-235000/config.json: {Name:mkf76fac62ed161e6f0ee15d46c8590bf2d75445 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:27:09.552185   18241 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:27:09.552217   18241 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:27:09.552242   18241 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:27:09.552293   18241 start.go:364] acquiring machines lock for bridge-235000: {Name:mk657d01ef7249ae2c5e4363eef7032a9cbaecaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:27:09.552456   18241 start.go:368] acquired machines lock for "bridge-235000" in 151.184µs
	I0223 13:27:09.552489   18241 start.go:93] Provisioning new machine with config: &{Name:bridge-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:bridge-235000 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:27:09.552562   18241 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:27:09.574567   18241 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:27:09.575063   18241 start.go:159] libmachine.API.Create for "bridge-235000" (driver="docker")
	I0223 13:27:09.575118   18241 client.go:168] LocalClient.Create starting
	I0223 13:27:09.575384   18241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:27:09.575481   18241 main.go:141] libmachine: Decoding PEM data...
	I0223 13:27:09.575516   18241 main.go:141] libmachine: Parsing certificate...
	I0223 13:27:09.575615   18241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:27:09.575681   18241 main.go:141] libmachine: Decoding PEM data...
	I0223 13:27:09.575702   18241 main.go:141] libmachine: Parsing certificate...
	I0223 13:27:09.596779   18241 cli_runner.go:164] Run: docker network inspect bridge-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:27:09.653254   18241 cli_runner.go:211] docker network inspect bridge-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:27:09.653358   18241 network_create.go:281] running [docker network inspect bridge-235000] to gather additional debugging logs...
	I0223 13:27:09.653374   18241 cli_runner.go:164] Run: docker network inspect bridge-235000
	W0223 13:27:09.707876   18241 cli_runner.go:211] docker network inspect bridge-235000 returned with exit code 1
	I0223 13:27:09.707906   18241 network_create.go:284] error running [docker network inspect bridge-235000]: docker network inspect bridge-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-235000
	I0223 13:27:09.707917   18241 network_create.go:286] output of [docker network inspect bridge-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-235000
	
	** /stderr **
	I0223 13:27:09.707998   18241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:27:09.763870   18241 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:09.764218   18241 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00005a8e0}
	I0223 13:27:09.764231   18241 network_create.go:123] attempt to create docker network bridge-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:27:09.764305   18241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000
	W0223 13:27:09.819330   18241 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000 returned with exit code 1
	W0223 13:27:09.819361   18241 network_create.go:148] failed to create docker network bridge-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:27:09.819377   18241 network_create.go:115] failed to create docker network bridge-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:27:09.820758   18241 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:09.821086   18241 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0008adc60}
	I0223 13:27:09.821097   18241 network_create.go:123] attempt to create docker network bridge-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:27:09.821165   18241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000
	I0223 13:27:09.907563   18241 network_create.go:107] docker network bridge-235000 192.168.67.0/24 created
	I0223 13:27:09.907592   18241 kic.go:117] calculated static IP "192.168.67.2" for the "bridge-235000" container
	I0223 13:27:09.907723   18241 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:27:09.966156   18241 cli_runner.go:164] Run: docker volume create bridge-235000 --label name.minikube.sigs.k8s.io=bridge-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:27:10.022338   18241 oci.go:103] Successfully created a docker volume bridge-235000
	I0223 13:27:10.022455   18241 cli_runner.go:164] Run: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:27:10.231970   18241 cli_runner.go:211] docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:27:10.232022   18241 client.go:171] LocalClient.Create took 656.891294ms
	I0223 13:27:12.233589   18241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:27:12.233727   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:12.290870   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:12.291001   18241 retry.go:31] will retry after 208.808997ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:12.502128   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:12.562309   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:12.562388   18241 retry.go:31] will retry after 274.891128ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:12.838283   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:12.894623   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:12.894703   18241 retry.go:31] will retry after 336.057445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:13.232660   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:13.288405   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	W0223 13:27:13.288499   18241 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	
	W0223 13:27:13.288514   18241 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:13.288570   18241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:27:13.288622   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:13.343428   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:13.343526   18241 retry.go:31] will retry after 240.625632ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:13.586158   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:13.657200   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:13.657280   18241 retry.go:31] will retry after 430.841957ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:14.088471   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:14.149454   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:14.149537   18241 retry.go:31] will retry after 334.591165ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:14.484408   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:14.545004   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	W0223 13:27:14.545103   18241 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	
	W0223 13:27:14.545117   18241 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:14.545123   18241 start.go:128] duration metric: createHost completed in 4.992545017s
	I0223 13:27:14.545129   18241 start.go:83] releasing machines lock for "bridge-235000", held for 4.992654911s
	W0223 13:27:14.545144   18241 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:27:14.545584   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:14.599203   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:14.599251   18241 delete.go:82] Unable to get host status for bridge-235000, assuming it has already been deleted: state: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	W0223 13:27:14.599409   18241 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:27:14.599419   18241 start.go:706] Will try again in 5 seconds ...
	I0223 13:27:19.599688   18241 start.go:364] acquiring machines lock for bridge-235000: {Name:mk657d01ef7249ae2c5e4363eef7032a9cbaecaa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:27:19.599845   18241 start.go:368] acquired machines lock for "bridge-235000" in 120.348µs
	I0223 13:27:19.599888   18241 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:27:19.599904   18241 fix.go:55] fixHost starting: 
	I0223 13:27:19.600335   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:19.660452   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:19.660498   18241 fix.go:103] recreateIfNeeded on bridge-235000: state= err=unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:19.660519   18241 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:27:19.682270   18241 out.go:177] * docker "bridge-235000" container is missing, will recreate.
	I0223 13:27:19.703992   18241 delete.go:124] DEMOLISHING bridge-235000 ...
	I0223 13:27:19.704226   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:19.759592   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:27:19.759636   18241 stop.go:75] unable to get state: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:19.759652   18241 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:19.760030   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:19.814770   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:19.814819   18241 delete.go:82] Unable to get host status for bridge-235000, assuming it has already been deleted: state: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:19.814914   18241 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-235000
	W0223 13:27:19.870147   18241 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-235000 returned with exit code 1
	I0223 13:27:19.870178   18241 kic.go:367] could not find the container bridge-235000 to remove it. will try anyways
	I0223 13:27:19.870251   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:19.924648   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:27:19.924690   18241 oci.go:84] error getting container status, will try to delete anyways: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:19.924777   18241 cli_runner.go:164] Run: docker exec --privileged -t bridge-235000 /bin/bash -c "sudo init 0"
	W0223 13:27:19.978879   18241 cli_runner.go:211] docker exec --privileged -t bridge-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:27:19.978916   18241 oci.go:641] error shutdown bridge-235000: docker exec --privileged -t bridge-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:20.981253   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:21.036440   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:21.036484   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:21.036496   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:21.036514   18241 retry.go:31] will retry after 484.979058ms: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:21.521836   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:21.578540   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:21.578583   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:21.578592   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:21.578613   18241 retry.go:31] will retry after 543.210575ms: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:22.124230   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:22.182397   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:22.182456   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:22.182469   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:22.182489   18241 retry.go:31] will retry after 767.793482ms: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:22.952667   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:23.011355   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:23.011398   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:23.011407   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:23.011425   18241 retry.go:31] will retry after 1.177787672s: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:24.191614   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:24.250207   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:24.250252   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:24.250261   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:24.250280   18241 retry.go:31] will retry after 1.394598356s: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:25.647317   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:25.706662   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:25.706706   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:25.706715   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:25.706733   18241 retry.go:31] will retry after 2.580906654s: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:28.289950   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:28.350554   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:28.350600   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:28.350609   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:28.350631   18241 retry.go:31] will retry after 4.16937125s: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:32.522451   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:32.579106   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:32.579147   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:32.579161   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:32.579180   18241 retry.go:31] will retry after 5.716663304s: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:38.298241   18241 cli_runner.go:164] Run: docker container inspect bridge-235000 --format={{.State.Status}}
	W0223 13:27:38.356273   18241 cli_runner.go:211] docker container inspect bridge-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:27:38.356315   18241 oci.go:653] temporary error verifying shutdown: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:38.356323   18241 oci.go:655] temporary error: container bridge-235000 status is  but expect it to be exited
	I0223 13:27:38.356346   18241 oci.go:88] couldn't shut down bridge-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "bridge-235000": docker container inspect bridge-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	 
	I0223 13:27:38.356423   18241 cli_runner.go:164] Run: docker rm -f -v bridge-235000
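Note on the block above: the ~17 seconds of inspect/retry output is minikube's shutdown verification (oci.go). It polls `docker container inspect --format {{.State.Status}}` with growing delays, and because the old container is already gone every poll fails with "No such container", so the verification is abandoned as "might be okay" and the host is force-removed with `docker rm -f -v`, as logged here. A minimal shell sketch of that poll-with-backoff pattern (illustrative only; the loop shape and timings are assumptions, not minikube's actual code):

    # Sketch of a poll-with-backoff shutdown check (assumed shape; not minikube source)
    name=bridge-235000
    delay=1
    for attempt in 1 2 3 4 5 6; do
        state=$(docker container inspect "$name" --format '{{.State.Status}}' 2>/dev/null)
        [ "$state" = "exited" ] && break        # verified: container has stopped
        sleep "$delay"
        delay=$((delay * 2))                    # back off before the next probe
    done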
	I0223 13:27:38.412600   18241 cli_runner.go:164] Run: docker container inspect -f {{.Id}} bridge-235000
	W0223 13:27:38.465950   18241 cli_runner.go:211] docker container inspect -f {{.Id}} bridge-235000 returned with exit code 1
	I0223 13:27:38.466053   18241 cli_runner.go:164] Run: docker network inspect bridge-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:27:38.521296   18241 cli_runner.go:164] Run: docker network rm bridge-235000
	W0223 13:27:38.626268   18241 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:27:38.626286   18241 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:27:39.627979   18241 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:27:39.650085   18241 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:27:39.650321   18241 start.go:159] libmachine.API.Create for "bridge-235000" (driver="docker")
	I0223 13:27:39.650362   18241 client.go:168] LocalClient.Create starting
	I0223 13:27:39.650564   18241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:27:39.650654   18241 main.go:141] libmachine: Decoding PEM data...
	I0223 13:27:39.650683   18241 main.go:141] libmachine: Parsing certificate...
	I0223 13:27:39.650785   18241 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:27:39.650848   18241 main.go:141] libmachine: Decoding PEM data...
	I0223 13:27:39.650876   18241 main.go:141] libmachine: Parsing certificate...
	I0223 13:27:39.672230   18241 cli_runner.go:164] Run: docker network inspect bridge-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:27:39.731050   18241 cli_runner.go:211] docker network inspect bridge-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:27:39.731156   18241 network_create.go:281] running [docker network inspect bridge-235000] to gather additional debugging logs...
	I0223 13:27:39.731174   18241 cli_runner.go:164] Run: docker network inspect bridge-235000
	W0223 13:27:39.785788   18241 cli_runner.go:211] docker network inspect bridge-235000 returned with exit code 1
	I0223 13:27:39.785821   18241 network_create.go:284] error running [docker network inspect bridge-235000]: docker network inspect bridge-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: bridge-235000
	I0223 13:27:39.785832   18241 network_create.go:286] output of [docker network inspect bridge-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: bridge-235000
	
	** /stderr **
	I0223 13:27:39.785916   18241 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:27:39.842275   18241 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:39.843757   18241 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:39.845231   18241 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:39.845643   18241 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00123ada0}
	I0223 13:27:39.845660   18241 network_create.go:123] attempt to create docker network bridge-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:27:39.845745   18241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000
	W0223 13:27:39.900644   18241 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000 returned with exit code 1
	W0223 13:27:39.900678   18241 network_create.go:148] failed to create docker network bridge-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:27:39.900693   18241 network_create.go:115] failed to create docker network bridge-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:27:39.902028   18241 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:39.902352   18241 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00123bc00}
	I0223 13:27:39.902362   18241 network_create.go:123] attempt to create docker network bridge-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:27:39.902429   18241 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=bridge-235000 bridge-235000
	I0223 13:27:39.989190   18241 network_create.go:107] docker network bridge-235000 192.168.85.0/24 created
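Note on the failed first attempt above: "Pool overlaps with other one on this address space" means 192.168.76.0/24 was already claimed by an existing Docker network that minikube's in-process reservations did not cover, so network_create.go treats the subnet as taken and retries with the next candidate, 192.168.85.0/24, which succeeds. If this recurs, one way to see which subnets the daemon has already handed out (illustrative command, not part of the test run):

    # List every Docker network together with the subnet(s) it claims
    docker network ls --format '{{.Name}}' | while read -r net; do
        printf '%s\t' "$net"
        docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
    done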
	I0223 13:27:39.989219   18241 kic.go:117] calculated static IP "192.168.85.2" for the "bridge-235000" container
	I0223 13:27:39.989343   18241 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:27:40.047598   18241 cli_runner.go:164] Run: docker volume create bridge-235000 --label name.minikube.sigs.k8s.io=bridge-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:27:40.102516   18241 oci.go:103] Successfully created a docker volume bridge-235000
	I0223 13:27:40.102640   18241 cli_runner.go:164] Run: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:27:40.237660   18241 cli_runner.go:211] docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:27:40.237698   18241 client.go:171] LocalClient.Create took 587.326272ms
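Note: the `docker run` that just returned exit code 125 is minikube's preload sidecar. It mounts the freshly created `bridge-235000` volume at /var and uses `--entrypoint /usr/bin/test` with the argument `-d /var/lib`, i.e. it runs `test -d /var/lib` inside the kicbase image as a quick sanity check on the volume before the real node container is started. Exit status 125 is docker's own error code, meaning the daemon failed before the entrypoint ever ran (the stderr appears further down: the containerd socket refused the connection). Re-running the same command by hand, reformatted here purely for readability, would surface the daemon error directly:

    docker run --rm --name bridge-235000-preload-sidecar \
        --label created_by.minikube.sigs.k8s.io=true \
        --label name.minikube.sigs.k8s.io=bridge-235000 \
        --entrypoint /usr/bin/test \
        -v bridge-235000:/var \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
        -d /var/lib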
	I0223 13:27:42.239013   18241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:27:42.239127   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:42.297889   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:42.297975   18241 retry.go:31] will retry after 332.561423ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:42.631289   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:42.689982   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:42.690074   18241 retry.go:31] will retry after 236.00728ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:42.926607   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:42.985717   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:42.985802   18241 retry.go:31] will retry after 728.929889ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:43.716443   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:43.772712   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	W0223 13:27:43.772808   18241 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	
	W0223 13:27:43.772825   18241 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
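Note: the two ssh_runner probes in this phase are disk-space checks on the node. `df -h /var | awk 'NR==2{print $5}'` grabs the Use% column from the second line of df output (the /var filesystem row), and the `df -BG /var | awk 'NR==2{print $4}'` probe that follows grabs the available space in GiB. Both fail here only because there is no container to open an SSH session to. Run inside a healthy node they would look roughly like this (the sample values are made up for illustration):

    df -h  /var | awk 'NR==2{print $5}'   # e.g. prints "67%"  (percent of /var used)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. prints "12G"  (GiB still available)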
	I0223 13:27:43.772884   18241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:27:43.772938   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:43.826884   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:43.826970   18241 retry.go:31] will retry after 355.304974ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:44.184224   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:44.242326   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:44.242411   18241 retry.go:31] will retry after 490.503981ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:44.735341   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:44.793830   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:44.793916   18241 retry.go:31] will retry after 632.424132ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:45.428770   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:45.485133   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	W0223 13:27:45.485237   18241 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	
	W0223 13:27:45.485254   18241 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:45.485270   18241 start.go:128] duration metric: createHost completed in 5.857255324s
	I0223 13:27:45.485342   18241 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:27:45.485406   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:45.539506   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:45.539596   18241 retry.go:31] will retry after 227.858428ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:45.769899   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:45.830766   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:45.830854   18241 retry.go:31] will retry after 205.538467ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:46.037045   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:46.094490   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:46.094581   18241 retry.go:31] will retry after 517.257742ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:46.612310   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:46.668449   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:46.668534   18241 retry.go:31] will retry after 535.544445ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:47.206495   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:47.265842   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	W0223 13:27:47.265947   18241 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	
	W0223 13:27:47.265962   18241 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:47.266030   18241 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:27:47.266080   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:47.321188   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:47.321282   18241 retry.go:31] will retry after 232.196964ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:47.555909   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:47.614384   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:47.614464   18241 retry.go:31] will retry after 301.796716ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:47.917751   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:47.974862   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	I0223 13:27:47.974957   18241 retry.go:31] will retry after 673.322974ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:48.648880   18241 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000
	W0223 13:27:48.708612   18241 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000 returned with exit code 1
	W0223 13:27:48.708699   18241 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	
	W0223 13:27:48.708716   18241 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "bridge-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" bridge-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: bridge-235000
	I0223 13:27:48.708720   18241 fix.go:57] fixHost completed within 29.10874954s
	I0223 13:27:48.708727   18241 start.go:83] releasing machines lock for "bridge-235000", held for 29.108803589s
	W0223 13:27:48.708863   18241 out.go:239] * Failed to start docker container. Running "minikube delete -p bridge-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p bridge-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:27:48.751834   18241 out.go:177] 
	W0223 13:27:48.772975   18241 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for bridge-235000 container: docker run --rm --name bridge-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=bridge-235000 --entrypoint /usr/bin/test -v bridge-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:27:48.773012   18241 out.go:239] * 
	* 
	W0223 13:27:48.774359   18241 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:27:48.859550   18241 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/bridge/Start (40.31s)
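Reading the failure as a whole: the bridge cluster never got far enough for Kubernetes or the bridge network plugin to matter. Recreating the host failed at the preload-sidecar step because Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections, so `docker run` exited 125 and minikube gave up with GUEST_PROVISION. Some hedged follow-up checks for this state, including the cleanup minikube itself suggests (commands are illustrative, not part of the test run):

    docker run --rm hello-world                           # does the daemon still create containers at all?
    docker info --format '{{.ServerVersion}} / {{.ContainersRunning}} running'
    out/minikube-darwin-amd64 delete -p bridge-235000     # cleanup suggested in the log above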

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (40.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker 
net_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubenet-235000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker : exit status 80 (40.474836304s)

                                                
                                                
-- stdout --
	* [kubenet-235000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubenet-235000 in cluster kubenet-235000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	* docker "kubenet-235000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=3072MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:27:57.190830   18658 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:27:57.191011   18658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:27:57.191017   18658 out.go:309] Setting ErrFile to fd 2...
	I0223 13:27:57.191021   18658 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:27:57.191141   18658 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:27:57.192490   18658 out.go:303] Setting JSON to false
	I0223 13:27:57.210765   18658 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3452,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:27:57.210833   18658 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:27:57.233308   18658 out.go:177] * [kubenet-235000] minikube v1.29.0 on Darwin 13.2
	I0223 13:27:57.274951   18658 notify.go:220] Checking for updates...
	I0223 13:27:57.296489   18658 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:27:57.318412   18658 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:27:57.339956   18658 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:27:57.361168   18658 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:27:57.382182   18658 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:27:57.403022   18658 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:27:57.425037   18658 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:27:57.425212   18658 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:27:57.425346   18658 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:27:57.425409   18658 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:27:57.485397   18658 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:27:57.485547   18658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:27:57.627390   18658 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:27:57.536363989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:27:57.669848   18658 out.go:177] * Using the docker driver based on user configuration
	I0223 13:27:57.691000   18658 start.go:296] selected driver: docker
	I0223 13:27:57.691030   18658 start.go:857] validating driver "docker" against <nil>
	I0223 13:27:57.691057   18658 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:27:57.694852   18658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:27:57.837547   18658 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:27:57.745213017 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:27:57.837681   18658 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:27:57.837862   18658 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:27:57.860158   18658 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:27:57.881378   18658 cni.go:80] network plugin configured as "kubenet", returning disabled
	I0223 13:27:57.881405   18658 start_flags.go:319] config:
	{Name:kubenet-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubenet-235000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker
CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:27:57.924207   18658 out.go:177] * Starting control plane node kubenet-235000 in cluster kubenet-235000
	I0223 13:27:57.945336   18658 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:27:57.966003   18658 out.go:177] * Pulling base image ...
	I0223 13:27:58.007103   18658 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:27:58.007149   18658 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:27:58.007161   18658 cache.go:57] Caching tarball of preloaded images
	I0223 13:27:58.007163   18658 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:27:58.007281   18658 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:27:58.007290   18658 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:27:58.007802   18658 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/kubenet-235000/config.json ...
	I0223 13:27:58.007896   18658 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/kubenet-235000/config.json: {Name:mk0d10901c5ea2ad0f16fcbc770d394848648767 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:27:58.063579   18658 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:27:58.063596   18658 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:27:58.063626   18658 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:27:58.063669   18658 start.go:364] acquiring machines lock for kubenet-235000: {Name:mkc78426a8b2a0802758ec4575e65d30d66ab0e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:27:58.063826   18658 start.go:368] acquired machines lock for "kubenet-235000" in 145.301µs
	I0223 13:27:58.063862   18658 start.go:93] Provisioning new machine with config: &{Name:kubenet-235000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:kubenet-235000 Namespace:default APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Di
sableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:27:58.063926   18658 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:27:58.085775   18658 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:27:58.086185   18658 start.go:159] libmachine.API.Create for "kubenet-235000" (driver="docker")
	I0223 13:27:58.086235   18658 client.go:168] LocalClient.Create starting
	I0223 13:27:58.086506   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:27:58.086603   18658 main.go:141] libmachine: Decoding PEM data...
	I0223 13:27:58.086637   18658 main.go:141] libmachine: Parsing certificate...
	I0223 13:27:58.086757   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:27:58.086827   18658 main.go:141] libmachine: Decoding PEM data...
	I0223 13:27:58.086874   18658 main.go:141] libmachine: Parsing certificate...
	I0223 13:27:58.107782   18658 cli_runner.go:164] Run: docker network inspect kubenet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:27:58.163542   18658 cli_runner.go:211] docker network inspect kubenet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:27:58.163653   18658 network_create.go:281] running [docker network inspect kubenet-235000] to gather additional debugging logs...
	I0223 13:27:58.163672   18658 cli_runner.go:164] Run: docker network inspect kubenet-235000
	W0223 13:27:58.219358   18658 cli_runner.go:211] docker network inspect kubenet-235000 returned with exit code 1
	I0223 13:27:58.219385   18658 network_create.go:284] error running [docker network inspect kubenet-235000]: docker network inspect kubenet-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-235000
	I0223 13:27:58.219395   18658 network_create.go:286] output of [docker network inspect kubenet-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-235000
	
	** /stderr **
	I0223 13:27:58.219478   18658 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:27:58.279121   18658 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:58.279440   18658 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d54c00}
	I0223 13:27:58.279454   18658 network_create.go:123] attempt to create docker network kubenet-235000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:27:58.279523   18658 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000
	W0223 13:27:58.335085   18658 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000 returned with exit code 1
	W0223 13:27:58.335126   18658 network_create.go:148] failed to create docker network kubenet-235000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:27:58.335145   18658 network_create.go:115] failed to create docker network kubenet-235000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:27:58.336455   18658 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:27:58.336768   18658 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d55a60}
	I0223 13:27:58.336779   18658 network_create.go:123] attempt to create docker network kubenet-235000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:27:58.336851   18658 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000
	I0223 13:27:58.424175   18658 network_create.go:107] docker network kubenet-235000 192.168.67.0/24 created
	I0223 13:27:58.424207   18658 kic.go:117] calculated static IP "192.168.67.2" for the "kubenet-235000" container
	I0223 13:27:58.424324   18658 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:27:58.482271   18658 cli_runner.go:164] Run: docker volume create kubenet-235000 --label name.minikube.sigs.k8s.io=kubenet-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:27:58.538223   18658 oci.go:103] Successfully created a docker volume kubenet-235000
	I0223 13:27:58.538337   18658 cli_runner.go:164] Run: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:27:58.752844   18658 cli_runner.go:211] docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:27:58.752884   18658 client.go:171] LocalClient.Create took 666.635192ms
	I0223 13:28:00.753380   18658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:28:00.753525   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:00.812301   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:00.812428   18658 retry.go:31] will retry after 310.682283ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:01.123785   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:01.181325   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:01.181412   18658 retry.go:31] will retry after 468.230098ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:01.651980   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:01.712378   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:01.712464   18658 retry.go:31] will retry after 823.399821ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:02.536696   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:02.593691   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	W0223 13:28:02.593793   18658 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	
	W0223 13:28:02.593816   18658 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:02.593875   18658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:28:02.593932   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:02.648098   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:02.648189   18658 retry.go:31] will retry after 221.020795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:02.871578   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:02.929631   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:02.929719   18658 retry.go:31] will retry after 294.88019ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:03.226936   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:03.284803   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:03.284894   18658 retry.go:31] will retry after 506.936412ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:03.794230   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:03.855075   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	W0223 13:28:03.855175   18658 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	
	W0223 13:28:03.855190   18658 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:03.855197   18658 start.go:128] duration metric: createHost completed in 5.791253947s
	I0223 13:28:03.855203   18658 start.go:83] releasing machines lock for "kubenet-235000", held for 5.79135664s
	W0223 13:28:03.855218   18658 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:28:03.855648   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:03.909258   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:03.909301   18658 delete.go:82] Unable to get host status for kubenet-235000, assuming it has already been deleted: state: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	W0223 13:28:03.909441   18658 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:28:03.909450   18658 start.go:706] Will try again in 5 seconds ...
	I0223 13:28:08.910417   18658 start.go:364] acquiring machines lock for kubenet-235000: {Name:mkc78426a8b2a0802758ec4575e65d30d66ab0e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:28:08.910585   18658 start.go:368] acquired machines lock for "kubenet-235000" in 131.611µs
	I0223 13:28:08.910631   18658 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:28:08.910645   18658 fix.go:55] fixHost starting: 
	I0223 13:28:08.911066   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:08.969572   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:08.969612   18658 fix.go:103] recreateIfNeeded on kubenet-235000: state= err=unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:08.969635   18658 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:28:08.991357   18658 out.go:177] * docker "kubenet-235000" container is missing, will recreate.
	I0223 13:28:09.013090   18658 delete.go:124] DEMOLISHING kubenet-235000 ...
	I0223 13:28:09.013282   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:09.069601   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:28:09.069656   18658 stop.go:75] unable to get state: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:09.069673   18658 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:09.070047   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:09.124456   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:09.124500   18658 delete.go:82] Unable to get host status for kubenet-235000, assuming it has already been deleted: state: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:09.124578   18658 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-235000
	W0223 13:28:09.179523   18658 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-235000 returned with exit code 1
	I0223 13:28:09.179555   18658 kic.go:367] could not find the container kubenet-235000 to remove it. will try anyways
	I0223 13:28:09.179630   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:09.233028   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	W0223 13:28:09.233073   18658 oci.go:84] error getting container status, will try to delete anyways: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:09.233151   18658 cli_runner.go:164] Run: docker exec --privileged -t kubenet-235000 /bin/bash -c "sudo init 0"
	W0223 13:28:09.287690   18658 cli_runner.go:211] docker exec --privileged -t kubenet-235000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:28:09.287722   18658 oci.go:641] error shutdown kubenet-235000: docker exec --privileged -t kubenet-235000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:10.289337   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:10.350339   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:10.350382   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:10.350389   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:10.350408   18658 retry.go:31] will retry after 662.09841ms: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:11.014458   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:11.073241   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:11.073294   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:11.073306   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:11.073328   18658 retry.go:31] will retry after 752.760711ms: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:11.826640   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:11.885043   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:11.885084   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:11.885091   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:11.885113   18658 retry.go:31] will retry after 689.903886ms: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:12.577555   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:12.636325   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:12.636375   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:12.636383   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:12.636403   18658 retry.go:31] will retry after 2.276868615s: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:14.914010   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:14.970798   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:14.970842   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:14.970850   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:14.970870   18658 retry.go:31] will retry after 2.243958016s: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:17.215226   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:17.271268   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:17.271322   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:17.271330   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:17.271350   18658 retry.go:31] will retry after 4.785444569s: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:22.057463   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:22.113098   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:22.113143   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:22.113151   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:22.113171   18658 retry.go:31] will retry after 4.628741739s: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:26.743352   18658 cli_runner.go:164] Run: docker container inspect kubenet-235000 --format={{.State.Status}}
	W0223 13:28:26.802592   18658 cli_runner.go:211] docker container inspect kubenet-235000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:26.802634   18658 oci.go:653] temporary error verifying shutdown: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:26.802646   18658 oci.go:655] temporary error: container kubenet-235000 status is  but expect it to be exited
	I0223 13:28:26.802672   18658 oci.go:88] couldn't shut down kubenet-235000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "kubenet-235000": docker container inspect kubenet-235000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	 
	I0223 13:28:26.802750   18658 cli_runner.go:164] Run: docker rm -f -v kubenet-235000
	I0223 13:28:26.861229   18658 cli_runner.go:164] Run: docker container inspect -f {{.Id}} kubenet-235000
	W0223 13:28:26.914923   18658 cli_runner.go:211] docker container inspect -f {{.Id}} kubenet-235000 returned with exit code 1
	I0223 13:28:26.915034   18658 cli_runner.go:164] Run: docker network inspect kubenet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:28:26.969921   18658 cli_runner.go:164] Run: docker network rm kubenet-235000
	W0223 13:28:27.132275   18658 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:28:27.132308   18658 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:28:28.134553   18658 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:28:28.156372   18658 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0223 13:28:28.156596   18658 start.go:159] libmachine.API.Create for "kubenet-235000" (driver="docker")
	I0223 13:28:28.156635   18658 client.go:168] LocalClient.Create starting
	I0223 13:28:28.156816   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:28:28.156901   18658 main.go:141] libmachine: Decoding PEM data...
	I0223 13:28:28.156928   18658 main.go:141] libmachine: Parsing certificate...
	I0223 13:28:28.157054   18658 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:28:28.157124   18658 main.go:141] libmachine: Decoding PEM data...
	I0223 13:28:28.157139   18658 main.go:141] libmachine: Parsing certificate...
	I0223 13:28:28.177251   18658 cli_runner.go:164] Run: docker network inspect kubenet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:28:28.237888   18658 cli_runner.go:211] docker network inspect kubenet-235000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:28:28.237988   18658 network_create.go:281] running [docker network inspect kubenet-235000] to gather additional debugging logs...
	I0223 13:28:28.238008   18658 cli_runner.go:164] Run: docker network inspect kubenet-235000
	W0223 13:28:28.292157   18658 cli_runner.go:211] docker network inspect kubenet-235000 returned with exit code 1
	I0223 13:28:28.292186   18658 network_create.go:284] error running [docker network inspect kubenet-235000]: docker network inspect kubenet-235000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-235000
	I0223 13:28:28.292198   18658 network_create.go:286] output of [docker network inspect kubenet-235000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-235000
	
	** /stderr **
	I0223 13:28:28.292284   18658 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:28:28.349112   18658 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:28:28.350602   18658 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:28:28.351891   18658 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:28:28.352184   18658 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00150ee70}
	I0223 13:28:28.352194   18658 network_create.go:123] attempt to create docker network kubenet-235000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:28:28.352267   18658 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000
	W0223 13:28:28.406916   18658 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000 returned with exit code 1
	W0223 13:28:28.406948   18658 network_create.go:148] failed to create docker network kubenet-235000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:28:28.406963   18658 network_create.go:115] failed to create docker network kubenet-235000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:28:28.408290   18658 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:28:28.408625   18658 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00117b280}
	I0223 13:28:28.408636   18658 network_create.go:123] attempt to create docker network kubenet-235000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:28:28.408705   18658 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubenet-235000 kubenet-235000
	I0223 13:28:28.496215   18658 network_create.go:107] docker network kubenet-235000 192.168.85.0/24 created
	I0223 13:28:28.496244   18658 kic.go:117] calculated static IP "192.168.85.2" for the "kubenet-235000" container
	I0223 13:28:28.496359   18658 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:28:28.553415   18658 cli_runner.go:164] Run: docker volume create kubenet-235000 --label name.minikube.sigs.k8s.io=kubenet-235000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:28:28.607812   18658 oci.go:103] Successfully created a docker volume kubenet-235000
	I0223 13:28:28.607947   18658 cli_runner.go:164] Run: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:28:28.739805   18658 cli_runner.go:211] docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:28:28.739846   18658 client.go:171] LocalClient.Create took 583.202259ms
	I0223 13:28:30.740153   18658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:28:30.740254   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:30.796811   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:30.796901   18658 retry.go:31] will retry after 294.645258ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:31.092663   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:31.152123   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:31.152210   18658 retry.go:31] will retry after 393.603551ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:31.548031   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:31.606944   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:31.607031   18658 retry.go:31] will retry after 459.598611ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:32.069031   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:32.129076   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	W0223 13:28:32.129172   18658 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	
	W0223 13:28:32.129187   18658 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:32.129252   18658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:28:32.129307   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:32.184693   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:32.184782   18658 retry.go:31] will retry after 286.520207ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:32.472193   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:32.532863   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:32.532951   18658 retry.go:31] will retry after 546.434135ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:33.081019   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:33.140696   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:33.140800   18658 retry.go:31] will retry after 347.336518ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:33.490443   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:33.550771   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:33.550873   18658 retry.go:31] will retry after 544.702081ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:34.098009   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:34.157451   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	W0223 13:28:34.157548   18658 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	
	W0223 13:28:34.157566   18658 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:34.157581   18658 start.go:128] duration metric: createHost completed in 6.022993052s
	I0223 13:28:34.157661   18658 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:28:34.157711   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:34.211986   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:34.212070   18658 retry.go:31] will retry after 201.974076ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:34.416489   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:34.477697   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:34.477788   18658 retry.go:31] will retry after 532.744373ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:35.012706   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:35.068567   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:35.068669   18658 retry.go:31] will retry after 815.673725ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:35.884662   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:35.940862   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	W0223 13:28:35.940960   18658 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	
	W0223 13:28:35.940975   18658 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:35.941033   18658 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:28:35.941082   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:35.995190   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:35.995283   18658 retry.go:31] will retry after 339.245123ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:36.336373   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:36.391750   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:36.391837   18658 retry.go:31] will retry after 492.46268ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:36.885031   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:36.941461   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	I0223 13:28:36.941544   18658 retry.go:31] will retry after 435.065419ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:37.377729   18658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000
	W0223 13:28:37.433333   18658 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000 returned with exit code 1
	W0223 13:28:37.433429   18658 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	
	W0223 13:28:37.433444   18658 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "kubenet-235000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-235000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: kubenet-235000
	I0223 13:28:37.433450   18658 fix.go:57] fixHost completed within 28.522739241s
	I0223 13:28:37.433457   18658 start.go:83] releasing machines lock for "kubenet-235000", held for 28.522792321s
	W0223 13:28:37.433593   18658 out.go:239] * Failed to start docker container. Running "minikube delete -p kubenet-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p kubenet-235000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:28:37.477078   18658 out.go:177] 
	W0223 13:28:37.498132   18658 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for kubenet-235000 container: docker run --rm --name kubenet-235000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-235000 --entrypoint /usr/bin/test -v kubenet-235000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:28:37.498162   18658 out.go:239] * 
	* 
	W0223 13:28:37.499430   18658 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:28:37.598910   18658 out.go:177] 

** /stderr **
net_test.go:113: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (40.49s)
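
Two problems are visible in the kubenet log above: the first docker network create attempts failed with "Pool overlaps with other one on this address space" (minikube retried onto the next free /24), and the preload-sidecar docker run exited with status 125 because Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refused connections. A minimal shell sketch for re-checking both conditions on the affected host, using only commands and names already present in the log:

    # Confirm the Docker Desktop daemon (and its containerd backend) is reachable;
    # the sidecar run above failed with "connection refused" on the containerd socket.
    docker version

    # List bridge networks and their subnets to see which /24 ranges are taken
    # (this explains the "Pool overlaps" retries from 192.168.58.0/24 onward).
    docker network ls --filter driver=bridge -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'

    # Re-run, verbatim, the sidecar command that returned exit status 125 in the log:
    docker run --rm --name kubenet-235000-preload-sidecar \
      --label created_by.minikube.sigs.k8s.io=true \
      --label name.minikube.sigs.k8s.io=kubenet-235000 \
      --entrypoint /usr/bin/test \
      -v kubenet-235000:/var \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc \
      -d /var/lib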

TestStartStop/group/old-k8s-version/serial/FirstStart (38.63s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-639000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E0223 13:29:17.397168    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-639000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 80 (38.456561709s)

                                                
                                                
-- stdout --
	* [old-k8s-version-639000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-639000 in cluster old-k8s-version-639000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-639000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:28:45.872378   19069 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:28:45.872536   19069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:28:45.872541   19069 out.go:309] Setting ErrFile to fd 2...
	I0223 13:28:45.872545   19069 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:28:45.872650   19069 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:28:45.873984   19069 out.go:303] Setting JSON to false
	I0223 13:28:45.892261   19069 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3500,"bootTime":1677184225,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:28:45.892359   19069 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:28:45.913578   19069 out.go:177] * [old-k8s-version-639000] minikube v1.29.0 on Darwin 13.2
	I0223 13:28:45.955399   19069 notify.go:220] Checking for updates...
	I0223 13:28:45.976564   19069 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:28:45.997414   19069 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:28:46.018485   19069 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:28:46.060298   19069 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:28:46.102446   19069 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:28:46.123692   19069 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:28:46.146388   19069 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:28:46.146602   19069 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:28:46.146748   19069 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:28:46.146812   19069 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:28:46.209293   19069 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:28:46.209399   19069 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:28:46.351937   19069 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:28:46.259465688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:28:46.374116   19069 out.go:177] * Using the docker driver based on user configuration
	I0223 13:28:46.395545   19069 start.go:296] selected driver: docker
	I0223 13:28:46.395580   19069 start.go:857] validating driver "docker" against <nil>
	I0223 13:28:46.395602   19069 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:28:46.399455   19069 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:28:46.543052   19069 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:28:46.450281551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:28:46.543153   19069 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:28:46.543320   19069 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:28:46.564723   19069 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:28:46.585718   19069 cni.go:84] Creating CNI manager for ""
	I0223 13:28:46.585757   19069 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 13:28:46.585773   19069 start_flags.go:319] config:
	{Name:old-k8s-version-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-639000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:28:46.628672   19069 out.go:177] * Starting control plane node old-k8s-version-639000 in cluster old-k8s-version-639000
	I0223 13:28:46.649865   19069 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:28:46.670650   19069 out.go:177] * Pulling base image ...
	I0223 13:28:46.712623   19069 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 13:28:46.712634   19069 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:28:46.712709   19069 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 13:28:46.712727   19069 cache.go:57] Caching tarball of preloaded images
	I0223 13:28:46.712936   19069 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:28:46.712953   19069 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 13:28:46.713932   19069 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/old-k8s-version-639000/config.json ...
	I0223 13:28:46.714030   19069 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/old-k8s-version-639000/config.json: {Name:mk6de6ab6aa1adebc5b28b2b3fb1e738ac5979af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:28:46.769357   19069 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:28:46.769378   19069 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:28:46.769405   19069 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:28:46.769455   19069 start.go:364] acquiring machines lock for old-k8s-version-639000: {Name:mk9cf1c4e3e710c0d1f8a7c5776e720012b688ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:28:46.769607   19069 start.go:368] acquired machines lock for "old-k8s-version-639000" in 141.592µs
	I0223 13:28:46.769643   19069 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-639000 Namespace:default APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:28:46.769702   19069 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:28:46.791173   19069 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:28:46.791390   19069 start.go:159] libmachine.API.Create for "old-k8s-version-639000" (driver="docker")
	I0223 13:28:46.791422   19069 client.go:168] LocalClient.Create starting
	I0223 13:28:46.791523   19069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:28:46.791569   19069 main.go:141] libmachine: Decoding PEM data...
	I0223 13:28:46.791588   19069 main.go:141] libmachine: Parsing certificate...
	I0223 13:28:46.791664   19069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:28:46.791702   19069 main.go:141] libmachine: Decoding PEM data...
	I0223 13:28:46.791712   19069 main.go:141] libmachine: Parsing certificate...
	I0223 13:28:46.812182   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:28:46.866955   19069 cli_runner.go:211] docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:28:46.867051   19069 network_create.go:281] running [docker network inspect old-k8s-version-639000] to gather additional debugging logs...
	I0223 13:28:46.867067   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000
	W0223 13:28:46.920896   19069 cli_runner.go:211] docker network inspect old-k8s-version-639000 returned with exit code 1
	I0223 13:28:46.920921   19069 network_create.go:284] error running [docker network inspect old-k8s-version-639000]: docker network inspect old-k8s-version-639000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-639000
	I0223 13:28:46.920932   19069 network_create.go:286] output of [docker network inspect old-k8s-version-639000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-639000
	
	** /stderr **
	I0223 13:28:46.921014   19069 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:28:46.977281   19069 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:28:46.977601   19069 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000445200}
	I0223 13:28:46.977614   19069 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:28:46.977691   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	W0223 13:28:47.085875   19069 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000 returned with exit code 1
	W0223 13:28:47.085917   19069 network_create.go:148] failed to create docker network old-k8s-version-639000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:28:47.085932   19069 network_create.go:115] failed to create docker network old-k8s-version-639000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:28:47.087328   19069 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:28:47.087646   19069 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003a62d0}
	I0223 13:28:47.087663   19069 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:28:47.087737   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	I0223 13:28:47.175330   19069 network_create.go:107] docker network old-k8s-version-639000 192.168.67.0/24 created
	I0223 13:28:47.175366   19069 kic.go:117] calculated static IP "192.168.67.2" for the "old-k8s-version-639000" container
	I0223 13:28:47.175472   19069 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:28:47.232884   19069 cli_runner.go:164] Run: docker volume create old-k8s-version-639000 --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:28:47.288545   19069 oci.go:103] Successfully created a docker volume old-k8s-version-639000
	I0223 13:28:47.288680   19069 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:28:47.502975   19069 cli_runner.go:211] docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:28:47.503021   19069 client.go:171] LocalClient.Create took 711.589628ms
	I0223 13:28:49.504850   19069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:28:49.505004   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:49.560300   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:28:49.560439   19069 retry.go:31] will retry after 368.649186ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:49.931070   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:49.987707   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:28:49.987797   19069 retry.go:31] will retry after 302.448724ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:50.291798   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:50.347919   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:28:50.348013   19069 retry.go:31] will retry after 742.750741ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:51.092315   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:51.148967   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:28:51.149059   19069 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:28:51.149084   19069 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:51.149145   19069 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:28:51.149199   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:51.204656   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:28:51.204739   19069 retry.go:31] will retry after 233.109117ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:51.439032   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:51.519564   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:28:51.519660   19069 retry.go:31] will retry after 468.008689ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:51.989516   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:52.045117   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:28:52.045218   19069 retry.go:31] will retry after 596.189412ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:52.642821   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:28:52.699772   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:28:52.699867   19069 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:28:52.699885   19069 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:52.699900   19069 start.go:128] duration metric: createHost completed in 5.930180101s
	I0223 13:28:52.699907   19069 start.go:83] releasing machines lock for "old-k8s-version-639000", held for 5.930278051s
	W0223 13:28:52.699922   19069 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:28:52.700356   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:52.755238   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:52.755290   19069 delete.go:82] Unable to get host status for old-k8s-version-639000, assuming it has already been deleted: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:28:52.755459   19069 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:28:52.755472   19069 start.go:706] Will try again in 5 seconds ...
	I0223 13:28:57.756653   19069 start.go:364] acquiring machines lock for old-k8s-version-639000: {Name:mk9cf1c4e3e710c0d1f8a7c5776e720012b688ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:28:57.756824   19069 start.go:368] acquired machines lock for "old-k8s-version-639000" in 134.508µs
	I0223 13:28:57.756866   19069 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:28:57.756880   19069 fix.go:55] fixHost starting: 
	I0223 13:28:57.757317   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:57.814313   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:57.814355   19069 fix.go:103] recreateIfNeeded on old-k8s-version-639000: state= err=unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:57.814373   19069 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:28:57.856989   19069 out.go:177] * docker "old-k8s-version-639000" container is missing, will recreate.
	I0223 13:28:57.877905   19069 delete.go:124] DEMOLISHING old-k8s-version-639000 ...
	I0223 13:28:57.878128   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:57.934586   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:28:57.934630   19069 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:57.934643   19069 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:57.935032   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:57.988582   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:57.988625   19069 delete.go:82] Unable to get host status for old-k8s-version-639000, assuming it has already been deleted: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:57.988706   19069 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-639000
	W0223 13:28:58.043988   19069 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-639000 returned with exit code 1
	I0223 13:28:58.044017   19069 kic.go:367] could not find the container old-k8s-version-639000 to remove it. will try anyways
	I0223 13:28:58.044093   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:58.099712   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:28:58.099755   19069 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:58.099843   19069 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0"
	W0223 13:28:58.154576   19069 cli_runner.go:211] docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:28:58.154608   19069 oci.go:641] error shutdown old-k8s-version-639000: docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:59.156736   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:59.212398   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:59.212443   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:59.212450   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:28:59.212476   19069 retry.go:31] will retry after 334.85783ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:59.548543   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:28:59.604213   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:28:59.604257   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:28:59.604266   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:28:59.604285   19069 retry.go:31] will retry after 1.015706157s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:00.620448   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:00.676076   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:00.676117   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:00.676124   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:00.676144   19069 retry.go:31] will retry after 1.39684262s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:02.074719   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:02.130636   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:02.130688   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:02.130696   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:02.130715   19069 retry.go:31] will retry after 1.865467868s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:03.998596   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:04.054786   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:04.054830   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:04.054840   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:04.054861   19069 retry.go:31] will retry after 1.307918353s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:05.365048   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:05.421841   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:05.421885   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:05.421893   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:05.421912   19069 retry.go:31] will retry after 2.552500824s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:07.976683   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:08.033595   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:08.033638   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:08.033645   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:08.033666   19069 retry.go:31] will retry after 6.162525591s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:14.197112   19069 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:14.253303   19069 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:14.253348   19069 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:14.253355   19069 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:14.253380   19069 oci.go:88] couldn't shut down old-k8s-version-639000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	 
	I0223 13:29:14.253458   19069 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-639000
	I0223 13:29:14.310561   19069 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-639000
	W0223 13:29:14.364778   19069 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-639000 returned with exit code 1
	I0223 13:29:14.364898   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:29:14.420135   19069 cli_runner.go:164] Run: docker network rm old-k8s-version-639000
	W0223 13:29:14.521447   19069 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:29:14.521467   19069 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:29:15.522614   19069 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:29:15.544918   19069 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:29:15.545106   19069 start.go:159] libmachine.API.Create for "old-k8s-version-639000" (driver="docker")
	I0223 13:29:15.545154   19069 client.go:168] LocalClient.Create starting
	I0223 13:29:15.545315   19069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:29:15.545377   19069 main.go:141] libmachine: Decoding PEM data...
	I0223 13:29:15.545395   19069 main.go:141] libmachine: Parsing certificate...
	I0223 13:29:15.545457   19069 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:29:15.545501   19069 main.go:141] libmachine: Decoding PEM data...
	I0223 13:29:15.545515   19069 main.go:141] libmachine: Parsing certificate...
	I0223 13:29:15.545985   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:29:15.601138   19069 cli_runner.go:211] docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:29:15.601228   19069 network_create.go:281] running [docker network inspect old-k8s-version-639000] to gather additional debugging logs...
	I0223 13:29:15.601246   19069 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000
	W0223 13:29:15.655846   19069 cli_runner.go:211] docker network inspect old-k8s-version-639000 returned with exit code 1
	I0223 13:29:15.655871   19069 network_create.go:284] error running [docker network inspect old-k8s-version-639000]: docker network inspect old-k8s-version-639000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-639000
	I0223 13:29:15.655884   19069 network_create.go:286] output of [docker network inspect old-k8s-version-639000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-639000
	
	** /stderr **
	I0223 13:29:15.655970   19069 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:29:15.711908   19069 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:29:15.713458   19069 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:29:15.714978   19069 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:29:15.715287   19069 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001001b80}
	I0223 13:29:15.715305   19069 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:29:15.715378   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	W0223 13:29:15.770956   19069 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000 returned with exit code 1
	W0223 13:29:15.770989   19069 network_create.go:148] failed to create docker network old-k8s-version-639000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:29:15.771002   19069 network_create.go:115] failed to create docker network old-k8s-version-639000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:29:15.772339   19069 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:29:15.772638   19069 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f35a50}
	I0223 13:29:15.772648   19069 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:29:15.772717   19069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	I0223 13:29:15.860066   19069 network_create.go:107] docker network old-k8s-version-639000 192.168.85.0/24 created
	I0223 13:29:15.860098   19069 kic.go:117] calculated static IP "192.168.85.2" for the "old-k8s-version-639000" container
	I0223 13:29:15.860214   19069 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:29:15.917816   19069 cli_runner.go:164] Run: docker volume create old-k8s-version-639000 --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:29:15.972886   19069 oci.go:103] Successfully created a docker volume old-k8s-version-639000
	I0223 13:29:15.973005   19069 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:29:16.110035   19069 cli_runner.go:211] docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:29:16.110076   19069 client.go:171] LocalClient.Create took 564.9153ms
	I0223 13:29:18.110861   19069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:29:18.111001   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:18.167564   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:18.167652   19069 retry.go:31] will retry after 253.763915ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:18.422543   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:18.479345   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:18.479434   19069 retry.go:31] will retry after 480.113473ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:18.960672   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:19.016461   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:19.016565   19069 retry.go:31] will retry after 349.591579ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:19.366592   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:19.422614   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:29:19.422722   19069 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:29:19.422736   19069 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:19.422798   19069 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:29:19.422848   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:19.476634   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:19.476732   19069 retry.go:31] will retry after 167.732022ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:19.646503   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:19.702040   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:19.702137   19069 retry.go:31] will retry after 341.48745ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:20.045137   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:20.102855   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:20.102952   19069 retry.go:31] will retry after 766.419909ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:20.870952   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:20.927930   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:29:20.928022   19069 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:29:20.928037   19069 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:20.928042   19069 start.go:128] duration metric: createHost completed in 5.405394376s
	I0223 13:29:20.928108   19069 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:29:20.928165   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:20.982434   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:20.982523   19069 retry.go:31] will retry after 274.697516ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:21.258362   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:21.314122   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:21.314210   19069 retry.go:31] will retry after 346.585602ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:21.662242   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:21.718928   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:21.719026   19069 retry.go:31] will retry after 559.986127ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:22.279986   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:22.337397   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:29:22.337488   19069 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:29:22.337512   19069 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:22.337577   19069 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:29:22.337629   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:22.391576   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:22.391660   19069 retry.go:31] will retry after 255.333477ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:22.647239   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:22.702379   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:22.702476   19069 retry.go:31] will retry after 480.539778ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:23.184864   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:23.240812   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:23.240898   19069 retry.go:31] will retry after 302.699216ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:23.544668   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:23.602171   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:29:23.602258   19069 retry.go:31] will retry after 450.403481ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:24.054713   19069 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:29:24.111752   19069 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:29:24.111858   19069 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:29:24.111872   19069 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:24.111876   19069 fix.go:57] fixHost completed within 26.354935873s
	I0223 13:29:24.111883   19069 start.go:83] releasing machines lock for "old-k8s-version-639000", held for 26.354985272s
	W0223 13:29:24.112009   19069 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-639000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-639000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:29:24.155346   19069 out.go:177] 
	W0223 13:29:24.176807   19069 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:29:24.176834   19069 out.go:239] * 
	* 
	W0223 13:29:24.178171   19069 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:29:24.240552   19069 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-639000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "5843ad7eb24ec35d942a1698eb07002ce3f498eae2a0c3e456e82e340ebd0642",
	        "Created": "2023-02-23T21:29:15.822881818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (100.343154ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:29:24.454949   19278 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (38.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-639000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-639000 create -f testdata/busybox.yaml: exit status 1 (34.253046ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-639000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-639000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "5843ad7eb24ec35d942a1698eb07002ce3f498eae2a0c3e456e82e340ebd0642",
	        "Created": "2023-02-23T21:29:15.822881818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (101.078019ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:29:24.648219   19285 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "5843ad7eb24ec35d942a1698eb07002ce3f498eae2a0c3e456e82e340ebd0642",
	        "Created": "2023-02-23T21:29:15.822881818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (99.927666ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:29:24.807502   19293 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-639000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-639000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-639000 describe deploy/metrics-server -n kube-system: exit status 1 (35.010014ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-639000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-639000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "5843ad7eb24ec35d942a1698eb07002ce3f498eae2a0c3e456e82e340ebd0642",
	        "Created": "2023-02-23T21:29:15.822881818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (101.208034ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:29:25.226260   19306 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (14.84s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-639000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p old-k8s-version-639000 --alsologtostderr -v=3: exit status 82 (14.675817372s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-639000"  ...
	* Stopping node "old-k8s-version-639000"  ...
	* Stopping node "old-k8s-version-639000"  ...
	* Stopping node "old-k8s-version-639000"  ...
	* Stopping node "old-k8s-version-639000"  ...
	* Stopping node "old-k8s-version-639000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:29:25.271443   19310 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:29:25.271640   19310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:29:25.271645   19310 out.go:309] Setting ErrFile to fd 2...
	I0223 13:29:25.271649   19310 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:29:25.271759   19310 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:29:25.272075   19310 out.go:303] Setting JSON to false
	I0223 13:29:25.272219   19310 mustload.go:65] Loading cluster: old-k8s-version-639000
	I0223 13:29:25.272457   19310 config.go:182] Loaded profile config "old-k8s-version-639000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 13:29:25.272525   19310 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/old-k8s-version-639000/config.json ...
	I0223 13:29:25.272791   19310 mustload.go:65] Loading cluster: old-k8s-version-639000
	I0223 13:29:25.272889   19310 config.go:182] Loaded profile config "old-k8s-version-639000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 13:29:25.272924   19310 stop.go:39] StopHost: old-k8s-version-639000
	I0223 13:29:25.294787   19310 out.go:177] * Stopping node "old-k8s-version-639000"  ...
	I0223 13:29:25.336918   19310 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:25.393385   19310 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:25.393449   19310 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:29:25.393471   19310 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:25.393513   19310 retry.go:31] will retry after 701.05083ms: docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:26.094772   19310 stop.go:39] StopHost: old-k8s-version-639000
	I0223 13:29:26.116648   19310 out.go:177] * Stopping node "old-k8s-version-639000"  ...
	I0223 13:29:26.158856   19310 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:26.215636   19310 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:26.215675   19310 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:29:26.215689   19310 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:26.215703   19310 retry.go:31] will retry after 1.126488175s: docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:27.342669   19310 stop.go:39] StopHost: old-k8s-version-639000
	I0223 13:29:27.363641   19310 out.go:177] * Stopping node "old-k8s-version-639000"  ...
	I0223 13:29:27.385773   19310 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:27.441639   19310 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:27.441677   19310 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:29:27.441692   19310 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:27.441706   19310 retry.go:31] will retry after 2.417382361s: docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:29.860248   19310 stop.go:39] StopHost: old-k8s-version-639000
	I0223 13:29:29.882343   19310 out.go:177] * Stopping node "old-k8s-version-639000"  ...
	I0223 13:29:29.903262   19310 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:29.960579   19310 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:29.960625   19310 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:29:29.960643   19310 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:29.960662   19310 retry.go:31] will retry after 2.455198987s: docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:32.416132   19310 stop.go:39] StopHost: old-k8s-version-639000
	I0223 13:29:32.438194   19310 out.go:177] * Stopping node "old-k8s-version-639000"  ...
	I0223 13:29:32.480581   19310 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:32.535775   19310 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:32.535815   19310 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:29:32.535826   19310 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:32.535841   19310 retry.go:31] will retry after 7.11701375s: docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:39.654415   19310 stop.go:39] StopHost: old-k8s-version-639000
	I0223 13:29:39.676662   19310 out.go:177] * Stopping node "old-k8s-version-639000"  ...
	I0223 13:29:39.698574   19310 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:39.754602   19310 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:39.754648   19310 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	W0223 13:29:39.754666   19310 stop.go:163] stop host returned error: ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:39.775362   19310 out.go:177] 
	W0223 13:29:39.796667   19310 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-639000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect old-k8s-version-639000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:29:39.796694   19310 out.go:239] * 
	* 
	W0223 13:29:39.801392   19310 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:29:39.860525   19310 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p old-k8s-version-639000 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "5843ad7eb24ec35d942a1698eb07002ce3f498eae2a0c3e456e82e340ebd0642",
	        "Created": "2023-02-23T21:29:15.822881818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (101.3857ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:29:40.063949   19350 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (14.84s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (100.582395ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:29:40.164763   19354 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-639000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "5843ad7eb24ec35d942a1698eb07002ce3f498eae2a0c3e456e82e340ebd0642",
	        "Created": "2023-02-23T21:29:15.822881818Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (101.290079ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0223 13:29:40.590994   19364 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.53s)
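Note that the post-mortem "docker inspect" above returns the profile's docker network (still present, subnet 192.168.85.0/24) while the container of the same name no longer exists, which is why the host state comes back "Nonexistent" instead of the expected "Stopped". A short hand check of that split state, assuming the same docker CLI the log uses (the --format strings are illustrative, adapted from the templates that appear later in the log):

	docker network inspect old-k8s-version-639000 --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	# old-k8s-version-639000 192.168.85.0/24
	docker container inspect old-k8s-version-639000 --format '{{.State.Status}}'
	# Error: No such container: old-k8s-version-639000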

TestStartStop/group/old-k8s-version/serial/SecondStart (61.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-639000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-639000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 80 (1m1.355651474s)

-- stdout --
	* [old-k8s-version-639000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-639000 in cluster old-k8s-version-639000
	* Pulling base image ...
	* docker "old-k8s-version-639000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "old-k8s-version-639000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0223 13:29:40.635700   19368 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:29:40.635851   19368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:29:40.635856   19368 out.go:309] Setting ErrFile to fd 2...
	I0223 13:29:40.635860   19368 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:29:40.635969   19368 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:29:40.637364   19368 out.go:303] Setting JSON to false
	I0223 13:29:40.656001   19368 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3555,"bootTime":1677184225,"procs":388,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:29:40.656276   19368 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:29:40.677975   19368 out.go:177] * [old-k8s-version-639000] minikube v1.29.0 on Darwin 13.2
	I0223 13:29:40.720545   19368 notify.go:220] Checking for updates...
	I0223 13:29:40.742644   19368 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:29:40.763779   19368 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:29:40.785690   19368 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:29:40.806594   19368 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:29:40.827551   19368 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:29:40.848477   19368 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:29:40.869723   19368 config.go:182] Loaded profile config "old-k8s-version-639000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 13:29:40.891490   19368 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0223 13:29:40.912613   19368 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:29:40.975181   19368 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:29:40.975296   19368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:29:41.117213   19368 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:29:41.02545287 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:
{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadowe
dPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:29:41.139176   19368 out.go:177] * Using the docker driver based on existing profile
	I0223 13:29:41.160782   19368 start.go:296] selected driver: docker
	I0223 13:29:41.160818   19368 start.go:857] validating driver "docker" against &{Name:old-k8s-version-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-639000 Namespace:default APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:29:41.160972   19368 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:29:41.165090   19368 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:29:41.307889   19368 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:63 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:29:41.215991871 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:29:41.308027   19368 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:29:41.308044   19368 cni.go:84] Creating CNI manager for ""
	I0223 13:29:41.308057   19368 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 13:29:41.308066   19368 start_flags.go:319] config:
	{Name:old-k8s-version-639000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-639000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTyp
e:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:29:41.329901   19368 out.go:177] * Starting control plane node old-k8s-version-639000 in cluster old-k8s-version-639000
	I0223 13:29:41.351585   19368 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:29:41.372623   19368 out.go:177] * Pulling base image ...
	I0223 13:29:41.393820   19368 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 13:29:41.393898   19368 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:29:41.393921   19368 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 13:29:41.393932   19368 cache.go:57] Caching tarball of preloaded images
	I0223 13:29:41.394132   19368 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:29:41.394148   19368 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 13:29:41.394950   19368 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/old-k8s-version-639000/config.json ...
	I0223 13:29:41.451072   19368 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:29:41.451090   19368 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:29:41.451125   19368 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:29:41.451167   19368 start.go:364] acquiring machines lock for old-k8s-version-639000: {Name:mk9cf1c4e3e710c0d1f8a7c5776e720012b688ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:29:41.451254   19368 start.go:368] acquired machines lock for "old-k8s-version-639000" in 69.088µs
	I0223 13:29:41.451286   19368 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:29:41.451294   19368 fix.go:55] fixHost starting: 
	I0223 13:29:41.451554   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:41.505465   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:41.505527   19368 fix.go:103] recreateIfNeeded on old-k8s-version-639000: state= err=unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:41.505549   19368 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:29:41.527259   19368 out.go:177] * docker "old-k8s-version-639000" container is missing, will recreate.
	I0223 13:29:41.548995   19368 delete.go:124] DEMOLISHING old-k8s-version-639000 ...
	I0223 13:29:41.549117   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:41.605062   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:41.605105   19368 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:41.605119   19368 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:41.605535   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:41.659882   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:41.659929   19368 delete.go:82] Unable to get host status for old-k8s-version-639000, assuming it has already been deleted: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:41.660011   19368 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-639000
	W0223 13:29:41.713411   19368 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-639000 returned with exit code 1
	I0223 13:29:41.713447   19368 kic.go:367] could not find the container old-k8s-version-639000 to remove it. will try anyways
	I0223 13:29:41.713528   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:41.767592   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:29:41.767649   19368 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:41.767743   19368 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0"
	W0223 13:29:41.821607   19368 cli_runner.go:211] docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:29:41.821646   19368 oci.go:641] error shutdown old-k8s-version-639000: docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:42.822931   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:42.878932   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:42.878979   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:42.878987   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:42.879040   19368 retry.go:31] will retry after 255.128501ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:43.134722   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:43.190267   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:43.190312   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:43.190323   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:43.190341   19368 retry.go:31] will retry after 515.423277ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:43.706087   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:43.762790   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:43.762832   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:43.762839   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:43.762859   19368 retry.go:31] will retry after 1.428619218s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:45.192690   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:45.247161   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:45.247210   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:45.247218   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:45.247238   19368 retry.go:31] will retry after 2.230347592s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:47.478083   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:47.533802   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:47.533843   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:47.533851   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:47.533880   19368 retry.go:31] will retry after 1.939740026s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:49.474514   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:49.530178   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:49.530222   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:49.530233   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:49.530255   19368 retry.go:31] will retry after 3.827530737s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:53.359048   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:53.414318   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:53.414363   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:53.414373   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:53.414393   19368 retry.go:31] will retry after 6.064694541s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:59.480387   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:29:59.537953   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:29:59.537998   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:29:59.538006   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:29:59.538030   19368 oci.go:88] couldn't shut down old-k8s-version-639000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	 
	I0223 13:29:59.538119   19368 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-639000
	I0223 13:29:59.593913   19368 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-639000
	W0223 13:29:59.648540   19368 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-639000 returned with exit code 1
	I0223 13:29:59.648647   19368 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:29:59.703511   19368 cli_runner.go:164] Run: docker network rm old-k8s-version-639000
	W0223 13:29:59.806397   19368 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:29:59.806416   19368 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:30:00.808001   19368 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:30:00.830101   19368 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:30:00.830315   19368 start.go:159] libmachine.API.Create for "old-k8s-version-639000" (driver="docker")
	I0223 13:30:00.830364   19368 client.go:168] LocalClient.Create starting
	I0223 13:30:00.830570   19368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:30:00.830655   19368 main.go:141] libmachine: Decoding PEM data...
	I0223 13:30:00.830691   19368 main.go:141] libmachine: Parsing certificate...
	I0223 13:30:00.830841   19368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:30:00.830910   19368 main.go:141] libmachine: Decoding PEM data...
	I0223 13:30:00.830928   19368 main.go:141] libmachine: Parsing certificate...
	I0223 13:30:00.852296   19368 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:30:00.909225   19368 cli_runner.go:211] docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:30:00.909311   19368 network_create.go:281] running [docker network inspect old-k8s-version-639000] to gather additional debugging logs...
	I0223 13:30:00.909329   19368 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000
	W0223 13:30:00.964307   19368 cli_runner.go:211] docker network inspect old-k8s-version-639000 returned with exit code 1
	I0223 13:30:00.964334   19368 network_create.go:284] error running [docker network inspect old-k8s-version-639000]: docker network inspect old-k8s-version-639000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-639000
	I0223 13:30:00.964349   19368 network_create.go:286] output of [docker network inspect old-k8s-version-639000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-639000
	
	** /stderr **
	I0223 13:30:00.964437   19368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:30:01.021477   19368 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:01.021809   19368 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000fcc100}
	I0223 13:30:01.021822   19368 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:30:01.021891   19368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	W0223 13:30:01.077016   19368 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000 returned with exit code 1
	W0223 13:30:01.077047   19368 network_create.go:148] failed to create docker network old-k8s-version-639000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:30:01.077068   19368 network_create.go:115] failed to create docker network old-k8s-version-639000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:30:01.078477   19368 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:01.078797   19368 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010654b0}
	I0223 13:30:01.078808   19368 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:30:01.078884   19368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	I0223 13:30:01.170598   19368 network_create.go:107] docker network old-k8s-version-639000 192.168.67.0/24 created
	I0223 13:30:01.170640   19368 kic.go:117] calculated static IP "192.168.67.2" for the "old-k8s-version-639000" container
	I0223 13:30:01.170767   19368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:30:01.229424   19368 cli_runner.go:164] Run: docker volume create old-k8s-version-639000 --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:30:01.283847   19368 oci.go:103] Successfully created a docker volume old-k8s-version-639000
	I0223 13:30:01.283966   19368 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:30:01.418257   19368 cli_runner.go:211] docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:30:01.418299   19368 client.go:171] LocalClient.Create took 587.924925ms
	I0223 13:30:03.418876   19368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:30:03.419012   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:03.474610   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:03.474707   19368 retry.go:31] will retry after 251.873228ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:03.726960   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:03.782960   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:03.783063   19368 retry.go:31] will retry after 338.320601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:04.123174   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:04.178702   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:04.178792   19368 retry.go:31] will retry after 517.822315ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:04.698508   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:04.756211   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:04.756309   19368 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:04.756323   19368 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:04.756383   19368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:30:04.756442   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:04.810447   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:04.810540   19368 retry.go:31] will retry after 172.4926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:04.984335   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:05.039784   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:05.039872   19368 retry.go:31] will retry after 434.089547ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:05.474884   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:05.531117   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:05.531205   19368 retry.go:31] will retry after 630.459806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:06.163168   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:06.218992   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:06.219091   19368 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:06.219109   19368 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:06.219113   19368 start.go:128] duration metric: createHost completed in 5.411078077s
	I0223 13:30:06.219185   19368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:30:06.219234   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:06.274498   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:06.274590   19368 retry.go:31] will retry after 193.184601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:06.469066   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:06.524489   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:06.524576   19368 retry.go:31] will retry after 253.80527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:06.778872   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:06.835073   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:06.835161   19368 retry.go:31] will retry after 341.181001ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:07.176647   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:07.232768   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:07.232870   19368 retry.go:31] will retry after 904.645634ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:08.138363   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:08.194675   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:08.194764   19368 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:08.194777   19368 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:08.194832   19368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:30:08.194893   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:08.249303   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:08.249394   19368 retry.go:31] will retry after 170.774282ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:08.420551   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:08.476064   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:08.476149   19368 retry.go:31] will retry after 552.833212ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:09.029625   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:09.085001   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:09.085091   19368 retry.go:31] will retry after 291.291962ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:09.376934   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:09.432610   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:09.432699   19368 retry.go:31] will retry after 664.441522ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:10.099080   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:10.153131   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:10.153230   19368 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:10.153248   19368 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:10.153275   19368 fix.go:57] fixHost completed within 28.701914337s
	I0223 13:30:10.153281   19368 start.go:83] releasing machines lock for "old-k8s-version-639000", held for 28.70195353s
	W0223 13:30:10.153296   19368 start.go:691] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W0223 13:30:10.153421   19368 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:30:10.153429   19368 start.go:706] Will try again in 5 seconds ...
	I0223 13:30:15.155104   19368 start.go:364] acquiring machines lock for old-k8s-version-639000: {Name:mk9cf1c4e3e710c0d1f8a7c5776e720012b688ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:15.155294   19368 start.go:368] acquired machines lock for "old-k8s-version-639000" in 148.826µs
	I0223 13:30:15.155337   19368 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:30:15.155345   19368 fix.go:55] fixHost starting: 
	I0223 13:30:15.155763   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:15.211205   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:15.211256   19368 fix.go:103] recreateIfNeeded on old-k8s-version-639000: state= err=unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:15.211265   19368 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:30:15.254813   19368 out.go:177] * docker "old-k8s-version-639000" container is missing, will recreate.
	I0223 13:30:15.276720   19368 delete.go:124] DEMOLISHING old-k8s-version-639000 ...
	I0223 13:30:15.276943   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:15.331959   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:30:15.332003   19368 stop.go:75] unable to get state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:15.332026   19368 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:15.332389   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:15.387128   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:15.387183   19368 delete.go:82] Unable to get host status for old-k8s-version-639000, assuming it has already been deleted: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:15.387270   19368 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-639000
	W0223 13:30:15.442797   19368 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-639000 returned with exit code 1
	I0223 13:30:15.442825   19368 kic.go:367] could not find the container old-k8s-version-639000 to remove it. will try anyways
	I0223 13:30:15.442918   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:15.497924   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	W0223 13:30:15.497966   19368 oci.go:84] error getting container status, will try to delete anyways: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:15.498047   19368 cli_runner.go:164] Run: docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0"
	W0223 13:30:15.551979   19368 cli_runner.go:211] docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:30:15.552007   19368 oci.go:641] error shutdown old-k8s-version-639000: docker exec --privileged -t old-k8s-version-639000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:16.552985   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:16.608641   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:16.608685   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:16.608693   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:16.608715   19368 retry.go:31] will retry after 283.05778ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:16.893075   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:16.948744   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:16.948796   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:16.948805   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:16.948826   19368 retry.go:31] will retry after 991.318231ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:17.940674   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:17.994733   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:17.994783   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:17.994791   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:17.994817   19368 retry.go:31] will retry after 890.939518ms: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:18.887262   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:18.941496   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:18.941549   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:18.941557   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:18.941577   19368 retry.go:31] will retry after 1.825602687s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:20.769093   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:20.826873   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:20.826928   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:20.826936   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:20.826956   19368 retry.go:31] will retry after 3.668890659s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:24.496944   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:24.553528   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:24.553572   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:24.553579   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:24.553609   19368 retry.go:31] will retry after 1.922979108s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:26.479007   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:26.535462   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:26.535507   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:26.535523   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:26.535542   19368 retry.go:31] will retry after 5.740761438s: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:32.278232   19368 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:32.334727   19368 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:32.334771   19368 oci.go:653] temporary error verifying shutdown: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:32.334780   19368 oci.go:655] temporary error: container old-k8s-version-639000 status is  but expect it to be exited
	I0223 13:30:32.334807   19368 oci.go:88] couldn't shut down old-k8s-version-639000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	 
	I0223 13:30:32.334881   19368 cli_runner.go:164] Run: docker rm -f -v old-k8s-version-639000
	I0223 13:30:32.390263   19368 cli_runner.go:164] Run: docker container inspect -f {{.Id}} old-k8s-version-639000
	W0223 13:30:32.445265   19368 cli_runner.go:211] docker container inspect -f {{.Id}} old-k8s-version-639000 returned with exit code 1
	I0223 13:30:32.445397   19368 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:30:32.501421   19368 cli_runner.go:164] Run: docker network rm old-k8s-version-639000
	W0223 13:30:32.605179   19368 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:30:32.605198   19368 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:30:33.606822   19368 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:30:33.650578   19368 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:30:33.650750   19368 start.go:159] libmachine.API.Create for "old-k8s-version-639000" (driver="docker")
	I0223 13:30:33.650786   19368 client.go:168] LocalClient.Create starting
	I0223 13:30:33.650953   19368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:30:33.651042   19368 main.go:141] libmachine: Decoding PEM data...
	I0223 13:30:33.651071   19368 main.go:141] libmachine: Parsing certificate...
	I0223 13:30:33.651174   19368 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:30:33.651237   19368 main.go:141] libmachine: Decoding PEM data...
	I0223 13:30:33.651263   19368 main.go:141] libmachine: Parsing certificate...
	I0223 13:30:33.651981   19368 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:30:33.708092   19368 cli_runner.go:211] docker network inspect old-k8s-version-639000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:30:33.708188   19368 network_create.go:281] running [docker network inspect old-k8s-version-639000] to gather additional debugging logs...
	I0223 13:30:33.708207   19368 cli_runner.go:164] Run: docker network inspect old-k8s-version-639000
	W0223 13:30:33.761907   19368 cli_runner.go:211] docker network inspect old-k8s-version-639000 returned with exit code 1
	I0223 13:30:33.761931   19368 network_create.go:284] error running [docker network inspect old-k8s-version-639000]: docker network inspect old-k8s-version-639000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-639000
	I0223 13:30:33.761943   19368 network_create.go:286] output of [docker network inspect old-k8s-version-639000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-639000
	
	** /stderr **
	I0223 13:30:33.762037   19368 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:30:33.819710   19368 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:33.820988   19368 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:33.822516   19368 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:33.822818   19368 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016ec250}
	I0223 13:30:33.822828   19368 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:30:33.822900   19368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	W0223 13:30:33.877806   19368 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000 returned with exit code 1
	W0223 13:30:33.877835   19368 network_create.go:148] failed to create docker network old-k8s-version-639000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:30:33.877849   19368 network_create.go:115] failed to create docker network old-k8s-version-639000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:30:33.879193   19368 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:33.879521   19368 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00171f760}
	I0223 13:30:33.879531   19368 network_create.go:123] attempt to create docker network old-k8s-version-639000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:30:33.879599   19368 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-639000 old-k8s-version-639000
	I0223 13:30:33.967062   19368 network_create.go:107] docker network old-k8s-version-639000 192.168.85.0/24 created
	I0223 13:30:33.967092   19368 kic.go:117] calculated static IP "192.168.85.2" for the "old-k8s-version-639000" container
	I0223 13:30:33.967205   19368 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:30:34.024872   19368 cli_runner.go:164] Run: docker volume create old-k8s-version-639000 --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:30:34.079798   19368 oci.go:103] Successfully created a docker volume old-k8s-version-639000
	I0223 13:30:34.079924   19368 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:30:34.211000   19368 cli_runner.go:211] docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:30:34.211047   19368 client.go:171] LocalClient.Create took 560.253729ms
	I0223 13:30:36.212519   19368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:30:36.212656   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:36.268603   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:36.268691   19368 retry.go:31] will retry after 240.863043ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:36.510427   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:36.565706   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:36.565795   19368 retry.go:31] will retry after 505.983776ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:37.072628   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:37.128648   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:37.128737   19368 retry.go:31] will retry after 460.449574ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:37.590659   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:37.647683   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:37.647789   19368 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:37.647804   19368 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:37.647858   19368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:30:37.647921   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:37.701429   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:37.701520   19368 retry.go:31] will retry after 157.427421ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:37.860785   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:37.917056   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:37.917149   19368 retry.go:31] will retry after 247.582603ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:38.166337   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:38.223170   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:38.223263   19368 retry.go:31] will retry after 557.613475ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:38.782702   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:38.837566   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:38.837672   19368 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:38.837685   19368 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:38.837692   19368 start.go:128] duration metric: createHost completed in 5.230831559s
	I0223 13:30:38.837764   19368 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:30:38.837813   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:38.891404   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:38.891483   19368 retry.go:31] will retry after 336.663709ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:39.230405   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:39.285657   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:39.285750   19368 retry.go:31] will retry after 513.861563ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:39.800601   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:39.856494   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:39.856597   19368 retry.go:31] will retry after 404.668191ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:40.262613   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:40.318570   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:40.318662   19368 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:40.318677   19368 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:40.318744   19368 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:30:40.318795   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:40.373843   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:40.373933   19368 retry.go:31] will retry after 350.689102ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:40.726261   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:40.781763   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:40.781841   19368 retry.go:31] will retry after 545.735382ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:41.329085   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:41.384970   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	I0223 13:30:41.385067   19368 retry.go:31] will retry after 332.606744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:41.718867   19368 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000
	W0223 13:30:41.774996   19368 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000 returned with exit code 1
	W0223 13:30:41.775086   19368 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:41.775102   19368 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "old-k8s-version-639000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-639000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	I0223 13:30:41.775109   19368 fix.go:57] fixHost completed within 26.619702538s
	I0223 13:30:41.775117   19368 start.go:83] releasing machines lock for "old-k8s-version-639000", held for 26.619747317s
	W0223 13:30:41.775263   19368 out.go:239] * Failed to start docker container. Running "minikube delete -p old-k8s-version-639000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p old-k8s-version-639000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:30:41.817872   19368 out.go:177] 
	W0223 13:30:41.840167   19368 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for old-k8s-version-639000 container: docker run --rm --name old-k8s-version-639000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-639000 --entrypoint /usr/bin/test -v old-k8s-version-639000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:30:41.840195   19368 out.go:239] * 
	* 
	W0223 13:30:41.841179   19368 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:30:41.903886   19368 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-639000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "e93a93d45cfe0f6b5428eedce7ea6bd6a7e4e9c55baf116ff18440ef2878c7be",
	        "Created": "2023-02-23T21:30:33.930338804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (101.814855ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:30:42.188043   19721 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (61.60s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-639000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "e93a93d45cfe0f6b5428eedce7ea6bd6a7e4e9c55baf116ff18440ef2878c7be",
	        "Created": "2023-02-23T21:30:33.930338804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (101.843114ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:30:42.348551   19727 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (0.16s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "old-k8s-version-639000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-639000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-639000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (34.884912ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-639000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-639000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "e93a93d45cfe0f6b5428eedce7ea6bd6a7e4e9c55baf116ff18440ef2878c7be",
	        "Created": "2023-02-23T21:30:33.930338804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (100.001784ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:30:42.541721   19734 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (0.19s)
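
The assertion behind this failure is simply "does the dashboard-metrics-scraper deployment run an image containing k8s.gcr.io/echoserver:1.4"; it never gets that far because the kubeconfig context is missing. A rough Go sketch of the same check, using a jsonpath query instead of parsing `kubectl describe` output (the helper name and query are mine, not the test's):

// addonimage.go: rough sketch of the image check this test performs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// deploymentImages returns the container images of a deployment in the given
// kubeconfig context; it fails the same way the log shows when the context
// does not exist.
func deploymentImages(ctx, ns, deploy string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "-n", ns,
		"get", "deploy", deploy,
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	return string(out), err
}

func main() {
	images, err := deploymentImages("old-k8s-version-639000",
		"kubernetes-dashboard", "dashboard-metrics-scraper")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("contains echoserver:1.4:",
		strings.Contains(images, "k8s.gcr.io/echoserver:1.4"))
}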

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p old-k8s-version-639000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p old-k8s-version-639000 "sudo crictl images -o json": exit status 80 (193.709848ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed tp get images inside minikube. args "out/minikube-darwin-amd64 ssh -p old-k8s-version-639000 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "e93a93d45cfe0f6b5428eedce7ea6bd6a7e4e9c55baf116ff18440ef2878c7be",
	        "Created": "2023-02-23T21:30:33.930338804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (100.768696ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:30:42.896582   19746 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.35s)
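
The image check here decodes `crictl images -o json` and diffs the reported repo tags against the wanted v1.16.0 list; because the ssh command produced no output at all, the decode fails and every wanted image is flagged missing. A small Go sketch of that comparison, assuming the usual crictl JSON shape ({"images":[{"repoTags":[...]}]}):

// imagediff.go: sketch of the want/got comparison behind the diff above.
// An empty output, as in this run, decodes to nothing and every wanted
// image is reported missing.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// missingImages returns the entries of want that do not appear among the
// repo tags reported by crictl.
func missingImages(crictlJSON []byte, want []string) ([]string, error) {
	var got crictlImages
	if len(crictlJSON) > 0 {
		if err := json.Unmarshal(crictlJSON, &got); err != nil {
			return nil, err
		}
	}
	have := map[string]bool{}
	for _, img := range got.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	var missing []string
	for _, w := range want {
		if !have[w] {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	want := []string{"k8s.gcr.io/pause:3.1", "k8s.gcr.io/kube-apiserver:v1.16.0"}
	missing, _ := missingImages([]byte(`{"images":[]}`), want)
	fmt.Println(missing)
}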

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-639000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p old-k8s-version-639000 --alsologtostderr -v=1: exit status 80 (190.379342ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:30:42.941488   19750 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:30:42.941674   19750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:30:42.941679   19750 out.go:309] Setting ErrFile to fd 2...
	I0223 13:30:42.941684   19750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:30:42.941784   19750 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:30:42.942096   19750 out.go:303] Setting JSON to false
	I0223 13:30:42.942116   19750 mustload.go:65] Loading cluster: old-k8s-version-639000
	I0223 13:30:42.942374   19750 config.go:182] Loaded profile config "old-k8s-version-639000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I0223 13:30:42.942745   19750 cli_runner.go:164] Run: docker container inspect old-k8s-version-639000 --format={{.State.Status}}
	W0223 13:30:42.998026   19750 cli_runner.go:211] docker container inspect old-k8s-version-639000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:43.019805   19750 out.go:177] 
	W0223 13:30:43.040852   19750 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	X Exiting due to GUEST_STATUS: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000
	
	W0223 13:30:43.040872   19750 out.go:239] * 
	* 
	W0223 13:30:43.044516   19750 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:30:43.065637   19750 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p old-k8s-version-639000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "e93a93d45cfe0f6b5428eedce7ea6bd6a7e4e9c55baf116ff18440ef2878c7be",
	        "Created": "2023-02-23T21:30:33.930338804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (100.463938ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:30:43.247028   19756 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-639000
helpers_test.go:235: (dbg) docker inspect old-k8s-version-639000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "old-k8s-version-639000",
	        "Id": "e93a93d45cfe0f6b5428eedce7ea6bd6a7e4e9c55baf116ff18440ef2878c7be",
	        "Created": "2023-02-23T21:30:33.930338804Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "old-k8s-version-639000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-639000 -n old-k8s-version-639000: exit status 7 (99.934025ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:30:43.408339   19763 status.go:249] status error: host: state: unknown state "old-k8s-version-639000": docker container inspect old-k8s-version-639000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: old-k8s-version-639000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "old-k8s-version-639000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (0.51s)
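
The pause failure is the same missing-container condition seen above, surfaced as GUEST_STATUS / exit code 80. Purely as an illustration (not how the test is written), a defensive wrapper would check the container state before invoking pause:

// pauseguard.go: illustrative guard around `minikube pause`; it checks the
// container state the same way the log shows minikube doing, and skips the
// pause when the container is gone instead of exiting with code 80.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "old-k8s-version-639000"
	out, err := exec.Command("docker", "container", "inspect",
		profile, "--format", "{{.State.Status}}").Output()
	state := strings.TrimSpace(string(out))
	if err != nil || state != "running" {
		fmt.Printf("not pausing %q: state=%q err=%v\n", profile, state, err)
		return
	}
	// Only reached when the container is actually running.
	fmt.Println("pause exit:",
		exec.Command("out/minikube-darwin-amd64", "pause", "-p", profile).Run())
}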

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (37.57s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-317000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0223 13:30:52.951969    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 13:31:09.923652    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-317000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (37.381873567s)

                                                
                                                
-- stdout --
	* [no-preload-317000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node no-preload-317000 in cluster no-preload-317000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-317000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:30:44.662819   19801 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:30:44.662984   19801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:30:44.662989   19801 out.go:309] Setting ErrFile to fd 2...
	I0223 13:30:44.662993   19801 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:30:44.663102   19801 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:30:44.664424   19801 out.go:303] Setting JSON to false
	I0223 13:30:44.682852   19801 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3619,"bootTime":1677184225,"procs":395,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:30:44.682925   19801 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:30:44.705018   19801 out.go:177] * [no-preload-317000] minikube v1.29.0 on Darwin 13.2
	I0223 13:30:44.747233   19801 notify.go:220] Checking for updates...
	I0223 13:30:44.747270   19801 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:30:44.769185   19801 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:30:44.791233   19801 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:30:44.813252   19801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:30:44.834977   19801 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:30:44.856142   19801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:30:44.877828   19801 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:30:44.877974   19801 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:30:44.878101   19801 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:30:44.878163   19801 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:30:44.941782   19801 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:30:44.941926   19801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:30:45.085985   19801 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:30:44.992915045 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:30:45.129856   19801 out.go:177] * Using the docker driver based on user configuration
	I0223 13:30:45.151840   19801 start.go:296] selected driver: docker
	I0223 13:30:45.151866   19801 start.go:857] validating driver "docker" against <nil>
	I0223 13:30:45.151885   19801 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:30:45.155732   19801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:30:45.298068   19801 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:30:45.205595961 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:30:45.298182   19801 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:30:45.298368   19801 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:30:45.319981   19801 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:30:45.341125   19801 cni.go:84] Creating CNI manager for ""
	I0223 13:30:45.341164   19801 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:30:45.341184   19801 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:30:45.341207   19801 start_flags.go:319] config:
	{Name:no-preload-317000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:no-preload-317000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:30:45.384926   19801 out.go:177] * Starting control plane node no-preload-317000 in cluster no-preload-317000
	I0223 13:30:45.405819   19801 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:30:45.426746   19801 out.go:177] * Pulling base image ...
	I0223 13:30:45.468878   19801 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:30:45.468886   19801 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:30:45.469026   19801 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/no-preload-317000/config.json ...
	I0223 13:30:45.469064   19801 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/no-preload-317000/config.json: {Name:mk09f3679a08024f707b67d4af9aafa5ea41dace Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:30:45.469116   19801 cache.go:107] acquiring lock: {Name:mk6fbe3d88148a778f3bf80c9cdb08cb932d0ddb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.469117   19801 cache.go:107] acquiring lock: {Name:mk638c39843f14049dbf512c0a5b834568a91030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.469159   19801 cache.go:107] acquiring lock: {Name:mk1823854fe2bada1dc0bd63300471cde3895c84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.469162   19801 cache.go:107] acquiring lock: {Name:mk4f9e75e998d297f4001f728279e4bcf855bb51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.469187   19801 cache.go:107] acquiring lock: {Name:mk621bc0e32f154e15df1e491d8291b27bd0b1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.469200   19801 cache.go:107] acquiring lock: {Name:mk5c0168fe38dd3e5ec674df1b7a69fd6d5b0e0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.470352   19801 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0223 13:30:45.470459   19801 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 1.301048ms
	I0223 13:30:45.470509   19801 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0223 13:30:45.470522   19801 cache.go:107] acquiring lock: {Name:mk9cd4d3edd7f650f6a7f63cc9c22405dac0be2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.470523   19801 cache.go:107] acquiring lock: {Name:mk89b1fa49307a079ae04b05999d81e7952611d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.471673   19801 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0223 13:30:45.471673   19801 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.26.1
	I0223 13:30:45.471690   19801 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 13:30:45.471717   19801 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.26.1
	I0223 13:30:45.471666   19801 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.26.1
	I0223 13:30:45.471797   19801 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.6-0
	I0223 13:30:45.471895   19801 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.9.3
	I0223 13:30:45.479217   19801 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.26.1: Error: No such image: registry.k8s.io/kube-scheduler:v1.26.1
	I0223 13:30:45.479590   19801 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.9.3: Error: No such image: registry.k8s.io/coredns/coredns:v1.9.3
	I0223 13:30:45.480374   19801 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.6-0: Error: No such image: registry.k8s.io/etcd:3.5.6-0
	I0223 13:30:45.480972   19801 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.26.1: Error: No such image: registry.k8s.io/kube-controller-manager:v1.26.1
	I0223 13:30:45.481999   19801 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.26.1: Error: No such image: registry.k8s.io/kube-proxy:v1.26.1
	I0223 13:30:45.482178   19801 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.26.1: Error: No such image: registry.k8s.io/kube-apiserver:v1.26.1
	I0223 13:30:45.482893   19801 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error: No such image: registry.k8s.io/pause:3.9
	I0223 13:30:45.531478   19801 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:30:45.531494   19801 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:30:45.531514   19801 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:30:45.531550   19801 start.go:364] acquiring machines lock for no-preload-317000: {Name:mkb232fc445eaf810e5edc9a0f7dd58d965890d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:45.531689   19801 start.go:368] acquired machines lock for "no-preload-317000" in 126.988µs
	I0223 13:30:45.531722   19801 start.go:93] Provisioning new machine with config: &{Name:no-preload-317000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:no-preload-317000 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:30:45.531805   19801 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:30:45.576737   19801 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:30:45.576918   19801 start.go:159] libmachine.API.Create for "no-preload-317000" (driver="docker")
	I0223 13:30:45.576944   19801 client.go:168] LocalClient.Create starting
	I0223 13:30:45.577035   19801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:30:45.577080   19801 main.go:141] libmachine: Decoding PEM data...
	I0223 13:30:45.577097   19801 main.go:141] libmachine: Parsing certificate...
	I0223 13:30:45.577157   19801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:30:45.577192   19801 main.go:141] libmachine: Decoding PEM data...
	I0223 13:30:45.577201   19801 main.go:141] libmachine: Parsing certificate...
	I0223 13:30:45.577757   19801 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:30:45.634566   19801 cli_runner.go:211] docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:30:45.634658   19801 network_create.go:281] running [docker network inspect no-preload-317000] to gather additional debugging logs...
	I0223 13:30:45.634677   19801 cli_runner.go:164] Run: docker network inspect no-preload-317000
	W0223 13:30:45.689362   19801 cli_runner.go:211] docker network inspect no-preload-317000 returned with exit code 1
	I0223 13:30:45.689390   19801 network_create.go:284] error running [docker network inspect no-preload-317000]: docker network inspect no-preload-317000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-317000
	I0223 13:30:45.689401   19801 network_create.go:286] output of [docker network inspect no-preload-317000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-317000
	
	** /stderr **
	I0223 13:30:45.689477   19801 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:30:45.748626   19801 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:45.748976   19801 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d76c90}
	I0223 13:30:45.748990   19801 network_create.go:123] attempt to create docker network no-preload-317000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:30:45.749085   19801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	W0223 13:30:45.805579   19801 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000 returned with exit code 1
	W0223 13:30:45.805615   19801 network_create.go:148] failed to create docker network no-preload-317000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:30:45.805632   19801 network_create.go:115] failed to create docker network no-preload-317000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:30:45.807190   19801 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:30:45.807480   19801 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0012a78c0}
	I0223 13:30:45.807492   19801 network_create.go:123] attempt to create docker network no-preload-317000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:30:45.807544   19801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	I0223 13:30:45.899499   19801 network_create.go:107] docker network no-preload-317000 192.168.67.0/24 created
	I0223 13:30:45.899533   19801 kic.go:117] calculated static IP "192.168.67.2" for the "no-preload-317000" container
	I0223 13:30:45.899637   19801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:30:45.958124   19801 cli_runner.go:164] Run: docker volume create no-preload-317000 --label name.minikube.sigs.k8s.io=no-preload-317000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:30:46.013976   19801 oci.go:103] Successfully created a docker volume no-preload-317000
	I0223 13:30:46.014089   19801 cli_runner.go:164] Run: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:30:46.241806   19801 cli_runner.go:211] docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:30:46.241857   19801 client.go:171] LocalClient.Create took 664.90542ms
	I0223 13:30:46.919893   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3
	I0223 13:30:46.938153   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1
	I0223 13:30:47.033286   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0
	I0223 13:30:47.425372   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1
	I0223 13:30:47.588143   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1
	I0223 13:30:47.756731   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1
	I0223 13:30:47.925969   19801 cache.go:162] opening:  /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0223 13:30:48.036102   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0223 13:30:48.036117   19801 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 2.562941194s
	I0223 13:30:48.036125   19801 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0223 13:30:48.246661   19801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:30:48.246737   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:48.301181   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:30:48.301295   19801 retry.go:31] will retry after 361.891449ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:48.664335   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:48.719919   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:30:48.720004   19801 retry.go:31] will retry after 492.885338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:49.061130   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 exists
	I0223 13:30:49.061145   19801 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1" took 3.585212903s
	I0223 13:30:49.061156   19801 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 succeeded
	I0223 13:30:49.214408   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	I0223 13:30:49.236276   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0223 13:30:49.236306   19801 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 3.758689255s
	I0223 13:30:49.236334   19801 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	W0223 13:30:49.269185   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:30:49.269270   19801 retry.go:31] will retry after 394.451732ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:49.432797   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 exists
	I0223 13:30:49.432815   19801 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1" took 3.955894237s
	I0223 13:30:49.432833   19801 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 succeeded
	I0223 13:30:49.664944   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:49.720348   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:30:49.720440   19801 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:30:49.720459   19801 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:49.720527   19801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:30:49.720576   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:49.774591   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:30:49.774672   19801 retry.go:31] will retry after 254.844009ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:49.802370   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 exists
	I0223 13:30:49.802385   19801 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1" took 4.324515163s
	I0223 13:30:49.802395   19801 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 succeeded
	I0223 13:30:49.818331   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 exists
	I0223 13:30:49.818351   19801 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1" took 4.340392169s
	I0223 13:30:49.818360   19801 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 succeeded
	I0223 13:30:50.031028   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:50.085080   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:30:50.085175   19801 retry.go:31] will retry after 433.642898ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:50.520108   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:50.574992   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:30:50.575083   19801 retry.go:31] will retry after 514.27997ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:51.090805   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:30:51.144539   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:30:51.144629   19801 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:30:51.144645   19801 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:51.144652   19801 start.go:128] duration metric: createHost completed in 5.600861261s
	I0223 13:30:51.144659   19801 start.go:83] releasing machines lock for "no-preload-317000", held for 5.600981643s
	W0223 13:30:51.144674   19801 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:30:51.145106   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:51.199277   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:51.199324   19801 delete.go:82] Unable to get host status for no-preload-317000, assuming it has already been deleted: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:30:51.199457   19801 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:30:51.199467   19801 start.go:706] Will try again in 5 seconds ...
	I0223 13:30:56.209649   19801 start.go:364] acquiring machines lock for no-preload-317000: {Name:mkb232fc445eaf810e5edc9a0f7dd58d965890d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:30:56.209801   19801 start.go:368] acquired machines lock for "no-preload-317000" in 118.266µs
	I0223 13:30:56.209841   19801 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:30:56.209872   19801 fix.go:55] fixHost starting: 
	I0223 13:30:56.210294   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:56.266377   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:56.266505   19801 fix.go:103] recreateIfNeeded on no-preload-317000: state= err=unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:56.266524   19801 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:30:56.292804   19801 out.go:177] * docker "no-preload-317000" container is missing, will recreate.
	I0223 13:30:56.314089   19801 delete.go:124] DEMOLISHING no-preload-317000 ...
	I0223 13:30:56.314282   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:56.369862   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:30:56.369905   19801 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:56.369921   19801 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:56.370314   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:56.424170   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:56.424221   19801 delete.go:82] Unable to get host status for no-preload-317000, assuming it has already been deleted: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:56.424303   19801 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-317000
	W0223 13:30:56.479067   19801 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-317000 returned with exit code 1
	I0223 13:30:56.479098   19801 kic.go:367] could not find the container no-preload-317000 to remove it. will try anyways
	I0223 13:30:56.479168   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:56.533062   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:30:56.533104   19801 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:56.533197   19801 cli_runner.go:164] Run: docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0"
	W0223 13:30:56.588030   19801 cli_runner.go:211] docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:30:56.588060   19801 oci.go:641] error shutdown no-preload-317000: docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:57.590333   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:57.645025   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:57.645072   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:57.645083   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:30:57.645106   19801 retry.go:31] will retry after 536.762518ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:58.182897   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:30:58.236914   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:58.236953   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:58.236963   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:30:58.236980   19801 retry.go:31] will retry after 785.933893ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:59.024166   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	I0223 13:30:59.031017   19801 cache.go:157] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 exists
	I0223 13:30:59.031040   19801 cache.go:96] cache image "registry.k8s.io/etcd:3.5.6-0" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0" took 13.534530864s
	I0223 13:30:59.031051   19801 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.6-0 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 succeeded
	I0223 13:30:59.031067   19801 cache.go:87] Successfully saved all images to host disk.
	W0223 13:30:59.077172   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:30:59.077225   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:30:59.077235   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:30:59.077256   19801 retry.go:31] will retry after 1.446626915s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:00.526848   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:00.588765   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:00.588811   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:00.588820   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:00.588842   19801 retry.go:31] will retry after 1.030413746s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:01.622839   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:01.681207   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:01.681246   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:01.681262   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:01.681282   19801 retry.go:31] will retry after 1.477081049s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:03.162408   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:03.219600   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:03.219640   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:03.219649   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:03.219669   19801 retry.go:31] will retry after 4.827890924s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:08.053034   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:08.111005   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:08.111048   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:08.111057   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:08.111078   19801 retry.go:31] will retry after 3.654902673s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:11.769745   19801 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:11.830017   19801 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:11.830060   19801 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:11.830069   19801 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:11.830093   19801 oci.go:88] couldn't shut down no-preload-317000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	 
	I0223 13:31:11.830188   19801 cli_runner.go:164] Run: docker rm -f -v no-preload-317000
	I0223 13:31:11.885988   19801 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-317000
	W0223 13:31:11.940122   19801 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-317000 returned with exit code 1
	I0223 13:31:11.940241   19801 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:31:11.995937   19801 cli_runner.go:164] Run: docker network rm no-preload-317000
	W0223 13:31:12.140161   19801 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:31:12.140181   19801 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:31:13.141466   19801 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:31:13.163434   19801 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:31:13.163616   19801 start.go:159] libmachine.API.Create for "no-preload-317000" (driver="docker")
	I0223 13:31:13.163673   19801 client.go:168] LocalClient.Create starting
	I0223 13:31:13.163872   19801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:31:13.163960   19801 main.go:141] libmachine: Decoding PEM data...
	I0223 13:31:13.163986   19801 main.go:141] libmachine: Parsing certificate...
	I0223 13:31:13.164078   19801 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:31:13.164145   19801 main.go:141] libmachine: Decoding PEM data...
	I0223 13:31:13.164162   19801 main.go:141] libmachine: Parsing certificate...
	I0223 13:31:13.164851   19801 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:31:13.223324   19801 cli_runner.go:211] docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:31:13.223416   19801 network_create.go:281] running [docker network inspect no-preload-317000] to gather additional debugging logs...
	I0223 13:31:13.223434   19801 cli_runner.go:164] Run: docker network inspect no-preload-317000
	W0223 13:31:13.277155   19801 cli_runner.go:211] docker network inspect no-preload-317000 returned with exit code 1
	I0223 13:31:13.277182   19801 network_create.go:284] error running [docker network inspect no-preload-317000]: docker network inspect no-preload-317000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-317000
	I0223 13:31:13.277208   19801 network_create.go:286] output of [docker network inspect no-preload-317000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-317000
	
	** /stderr **
	I0223 13:31:13.277299   19801 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:31:13.334651   19801 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:31:13.335927   19801 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:31:13.337475   19801 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:31:13.337805   19801 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001833040}
	I0223 13:31:13.337818   19801 network_create.go:123] attempt to create docker network no-preload-317000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:31:13.337889   19801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	W0223 13:31:13.393095   19801 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000 returned with exit code 1
	W0223 13:31:13.393125   19801 network_create.go:148] failed to create docker network no-preload-317000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:31:13.393140   19801 network_create.go:115] failed to create docker network no-preload-317000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:31:13.394728   19801 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:31:13.395070   19801 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00172e020}
	I0223 13:31:13.395088   19801 network_create.go:123] attempt to create docker network no-preload-317000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:31:13.395152   19801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	I0223 13:31:13.481410   19801 network_create.go:107] docker network no-preload-317000 192.168.85.0/24 created
	I0223 13:31:13.481444   19801 kic.go:117] calculated static IP "192.168.85.2" for the "no-preload-317000" container
	I0223 13:31:13.481585   19801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:31:13.538715   19801 cli_runner.go:164] Run: docker volume create no-preload-317000 --label name.minikube.sigs.k8s.io=no-preload-317000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:31:13.593426   19801 oci.go:103] Successfully created a docker volume no-preload-317000
	I0223 13:31:13.593542   19801 cli_runner.go:164] Run: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:31:13.734550   19801 cli_runner.go:211] docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:31:13.734588   19801 client.go:171] LocalClient.Create took 570.588144ms
	I0223 13:31:15.736557   19801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:31:15.736680   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:15.794306   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:15.794394   19801 retry.go:31] will retry after 325.981901ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:16.123025   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:16.183848   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:16.183934   19801 retry.go:31] will retry after 442.04198ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:16.626721   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:16.685525   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:16.685623   19801 retry.go:31] will retry after 449.897488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:17.136689   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:17.195014   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:31:17.195122   19801 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:31:17.195140   19801 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:17.195210   19801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:31:17.195265   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:17.249639   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:17.249735   19801 retry.go:31] will retry after 232.323649ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:17.484488   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:17.543505   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:17.543595   19801 retry.go:31] will retry after 391.892667ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:17.938109   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:17.996839   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:17.996925   19801 retry.go:31] will retry after 326.301882ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:18.325751   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:18.385514   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:18.385611   19801 retry.go:31] will retry after 493.128993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:18.879802   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:18.937686   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:31:18.937793   19801 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:31:18.937815   19801 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:18.937821   19801 start.go:128] duration metric: createHost completed in 5.793606332s
	I0223 13:31:18.937898   19801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:31:18.937950   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:18.991618   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:18.991702   19801 retry.go:31] will retry after 190.687433ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:19.184112   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:19.241997   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:19.242079   19801 retry.go:31] will retry after 528.835111ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:19.773509   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:19.830762   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:19.830844   19801 retry.go:31] will retry after 759.875359ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:20.593344   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:20.653180   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:31:20.653272   19801 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:31:20.653285   19801 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:20.653353   19801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:31:20.653416   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:20.708605   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:20.708689   19801 retry.go:31] will retry after 342.771291ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:21.053126   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:21.108822   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:21.108906   19801 retry.go:31] will retry after 280.460238ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:21.389854   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:21.450170   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:31:21.450253   19801 retry.go:31] will retry after 357.268388ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:21.809775   19801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:31:21.869846   19801 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:31:21.869944   19801 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:31:21.869965   19801 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:21.869969   19801 fix.go:57] fixHost completed within 25.639088534s
	I0223 13:31:21.869976   19801 start.go:83] releasing machines lock for "no-preload-317000", held for 25.639136651s
	W0223 13:31:21.870149   19801 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-317000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p no-preload-317000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:31:21.913549   19801 out.go:177] 
	W0223 13:31:21.934828   19801 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:31:21.934859   19801 out.go:239] * 
	* 
	W0223 13:31:21.936115   19801 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:31:21.998604   19801 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p no-preload-317000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "8a0c87970bc76bf12ec3ac564174990100e8e4a34380b4a89856e096fa4ac80d",
	        "Created": "2023-02-23T21:31:13.430219965Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (99.593796ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:22.234684   20114 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (37.57s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-317000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context no-preload-317000 create -f testdata/busybox.yaml: exit status 1 (34.353133ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-317000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context no-preload-317000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "8a0c87970bc76bf12ec3ac564174990100e8e4a34380b4a89856e096fa4ac80d",
	        "Created": "2023-02-23T21:31:13.430219965Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (100.132146ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:22.427071   20121 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "8a0c87970bc76bf12ec3ac564174990100e8e4a34380b4a89856e096fa4ac80d",
	        "Created": "2023-02-23T21:31:13.430219965Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (101.139467ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:22.586160   20127 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (0.35s)
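The DeployApp step fails before any resources are created because the kubeconfig context for the profile was never written (the first start failed). As a hedged illustration only, a pre-check like the Go sketch below, using the standard `kubectl config get-contexts -o name` listing, would distinguish "context missing" from a genuine apply error; the helper name is hypothetical.

	// contextExists is a hypothetical check for the exact failure above:
	// `error: context "no-preload-317000" does not exist`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func contextExists(name string) (bool, error) {
		out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
		if err != nil {
			return false, err
		}
		for _, ctx := range strings.Fields(string(out)) {
			if ctx == name {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		ok, err := contextExists("no-preload-317000")
		fmt.Println(ok, err)
	}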

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-317000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-317000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context no-preload-317000 describe deploy/metrics-server -n kube-system: exit status 1 (35.060257ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-317000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context no-preload-317000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "8a0c87970bc76bf12ec3ac564174990100e8e4a34380b4a89856e096fa4ac80d",
	        "Created": "2023-02-23T21:31:13.430219965Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (101.191551ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:23.002756   20140 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (15.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-317000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p no-preload-317000 --alsologtostderr -v=3: exit status 82 (14.94949453s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-317000"  ...
	* Stopping node "no-preload-317000"  ...
	* Stopping node "no-preload-317000"  ...
	* Stopping node "no-preload-317000"  ...
	* Stopping node "no-preload-317000"  ...
	* Stopping node "no-preload-317000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:31:23.047791   20144 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:31:23.047957   20144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:31:23.047962   20144 out.go:309] Setting ErrFile to fd 2...
	I0223 13:31:23.047966   20144 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:31:23.048074   20144 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:31:23.048378   20144 out.go:303] Setting JSON to false
	I0223 13:31:23.048512   20144 mustload.go:65] Loading cluster: no-preload-317000
	I0223 13:31:23.048759   20144 config.go:182] Loaded profile config "no-preload-317000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:31:23.048821   20144 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/no-preload-317000/config.json ...
	I0223 13:31:23.049104   20144 mustload.go:65] Loading cluster: no-preload-317000
	I0223 13:31:23.049196   20144 config.go:182] Loaded profile config "no-preload-317000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:31:23.049229   20144 stop.go:39] StopHost: no-preload-317000
	I0223 13:31:23.071645   20144 out.go:177] * Stopping node "no-preload-317000"  ...
	I0223 13:31:23.114343   20144 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:23.169397   20144 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:23.169474   20144 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:31:23.169495   20144 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:23.169537   20144 retry.go:31] will retry after 1.391767473s: docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:24.563822   20144 stop.go:39] StopHost: no-preload-317000
	I0223 13:31:24.587281   20144 out.go:177] * Stopping node "no-preload-317000"  ...
	I0223 13:31:24.608999   20144 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:24.668007   20144 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:24.668056   20144 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:31:24.668070   20144 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:24.668085   20144 retry.go:31] will retry after 1.397989455s: docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:26.068536   20144 stop.go:39] StopHost: no-preload-317000
	I0223 13:31:26.090570   20144 out.go:177] * Stopping node "no-preload-317000"  ...
	I0223 13:31:26.132631   20144 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:26.193975   20144 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:26.194012   20144 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:31:26.194024   20144 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:26.194038   20144 retry.go:31] will retry after 1.793952657s: docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:27.989731   20144 stop.go:39] StopHost: no-preload-317000
	I0223 13:31:28.012025   20144 out.go:177] * Stopping node "no-preload-317000"  ...
	I0223 13:31:28.033989   20144 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:28.093447   20144 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:28.093496   20144 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:31:28.093512   20144 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:28.093530   20144 retry.go:31] will retry after 3.057560205s: docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:31.152874   20144 stop.go:39] StopHost: no-preload-317000
	I0223 13:31:31.175227   20144 out.go:177] * Stopping node "no-preload-317000"  ...
	I0223 13:31:31.216943   20144 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:31.275946   20144 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:31.275989   20144 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:31:31.276001   20144 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:31.276017   20144 retry.go:31] will retry after 6.425833145s: docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:37.703661   20144 stop.go:39] StopHost: no-preload-317000
	I0223 13:31:37.725606   20144 out.go:177] * Stopping node "no-preload-317000"  ...
	I0223 13:31:37.746701   20144 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:37.806934   20144 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:37.806976   20144 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	W0223 13:31:37.806988   20144 stop.go:163] stop host returned error: ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:37.828278   20144 out.go:177] 
	W0223 13:31:37.849407   20144 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-317000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect no-preload-317000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:31:37.849433   20144 out.go:239] * 
	* 
	W0223 13:31:37.854219   20144 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:31:37.913275   20144 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p no-preload-317000 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "8a0c87970bc76bf12ec3ac564174990100e8e4a34380b4a89856e096fa4ac80d",
	        "Created": "2023-02-23T21:31:13.430219965Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (100.013536ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:38.116390   20183 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (15.11s)
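The Stop log above shows the generic pattern behind the six "Stopping node" attempts: retry the operation with a growing delay (retry.go prints roughly 1.4s, 1.4s, 1.8s, 3.1s, 6.4s) until an overall budget is spent, then fail with GUEST_STOP_TIMEOUT (exit status 82). The Go sketch below is a minimal, hypothetical rendering of that retry-with-backoff-and-deadline shape, not minikube's actual code.

	// retryUntil is a hypothetical sketch of the backoff loop visible in the
	// Stop output: retry with a growing, jittered delay until a deadline,
	// then give up with a timeout-style error.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(deadline time.Duration, op func() error) error {
		start := time.Now()
		delay := time.Second
		for {
			err := op()
			if err == nil {
				return nil
			}
			if time.Since(start) > deadline {
				return fmt.Errorf("giving up after %s: %w", deadline, err)
			}
			// grow the delay and add jitter, roughly like the intervals
			// printed by retry.go in the log above
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
			delay *= 2
		}
	}

	func main() {
		err := retryUntil(15*time.Second, func() error {
			return errors.New("No such container: no-preload-317000")
		})
		fmt.Println(err)
	}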

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (100.017113ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:38.216631   20187 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-317000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "8a0c87970bc76bf12ec3ac564174990100e8e4a34380b4a89856e096fa4ac80d",
	        "Created": "2023-02-23T21:31:13.430219965Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (100.364313ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:31:38.642465   20197 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (64.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-317000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1
E0223 13:31:46.604860    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p no-preload-317000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (1m4.260776549s)

                                                
                                                
-- stdout --
	* [no-preload-317000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node no-preload-317000 in cluster no-preload-317000
	* Pulling base image ...
	* docker "no-preload-317000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "no-preload-317000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:31:38.686793   20201 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:31:38.686958   20201 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:31:38.686963   20201 out.go:309] Setting ErrFile to fd 2...
	I0223 13:31:38.686967   20201 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:31:38.687074   20201 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:31:38.688380   20201 out.go:303] Setting JSON to false
	I0223 13:31:38.706632   20201 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3673,"bootTime":1677184225,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:31:38.706709   20201 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:31:38.728426   20201 out.go:177] * [no-preload-317000] minikube v1.29.0 on Darwin 13.2
	I0223 13:31:38.771226   20201 notify.go:220] Checking for updates...
	I0223 13:31:38.793115   20201 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:31:38.815133   20201 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:31:38.835998   20201 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:31:38.857252   20201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:31:38.879168   20201 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:31:38.901135   20201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:31:38.922888   20201 config.go:182] Loaded profile config "no-preload-317000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:31:38.924902   20201 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:31:38.985812   20201 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:31:38.985946   20201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:31:39.127216   20201 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:31:39.034157863 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:31:39.170783   20201 out.go:177] * Using the docker driver based on existing profile
	I0223 13:31:39.191958   20201 start.go:296] selected driver: docker
	I0223 13:31:39.191982   20201 start.go:857] validating driver "docker" against &{Name:no-preload-317000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:no-preload-317000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mount
GID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:31:39.192091   20201 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:31:39.195921   20201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:31:39.336657   20201 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:31:39.243947901 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:31:39.336824   20201 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:31:39.336843   20201 cni.go:84] Creating CNI manager for ""
	I0223 13:31:39.336856   20201 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:31:39.336867   20201 start_flags.go:319] config:
	{Name:no-preload-317000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:no-preload-317000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:31:39.358652   20201 out.go:177] * Starting control plane node no-preload-317000 in cluster no-preload-317000
	I0223 13:31:39.381325   20201 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:31:39.402252   20201 out.go:177] * Pulling base image ...
	I0223 13:31:39.444334   20201 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:31:39.444359   20201 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:31:39.444561   20201 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/no-preload-317000/config.json ...
	I0223 13:31:39.444668   20201 cache.go:107] acquiring lock: {Name:mk6fbe3d88148a778f3bf80c9cdb08cb932d0ddb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.444669   20201 cache.go:107] acquiring lock: {Name:mk1823854fe2bada1dc0bd63300471cde3895c84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.445846   20201 cache.go:107] acquiring lock: {Name:mk9cd4d3edd7f650f6a7f63cc9c22405dac0be2e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.446000   20201 cache.go:107] acquiring lock: {Name:mk89b1fa49307a079ae04b05999d81e7952611d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.445805   20201 cache.go:107] acquiring lock: {Name:mk5c0168fe38dd3e5ec674df1b7a69fd6d5b0e0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.446070   20201 cache.go:107] acquiring lock: {Name:mk4f9e75e998d297f4001f728279e4bcf855bb51 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.446948   20201 cache.go:107] acquiring lock: {Name:mk638c39843f14049dbf512c0a5b834568a91030 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.446368   20201 cache.go:107] acquiring lock: {Name:mk621bc0e32f154e15df1e491d8291b27bd0b1ed Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.447487   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0223 13:31:39.447502   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 exists
	I0223 13:31:39.447505   20201 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 2.836752ms
	I0223 13:31:39.447523   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 exists
	I0223 13:31:39.447535   20201 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0223 13:31:39.447531   20201 cache.go:96] cache image "registry.k8s.io/etcd:3.5.6-0" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0" took 1.941591ms
	I0223 13:31:39.447498   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 exists
	I0223 13:31:39.447545   20201 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.6-0 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.6-0 succeeded
	I0223 13:31:39.447541   20201 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1" took 2.34967ms
	I0223 13:31:39.447551   20201 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1" took 1.675437ms
	I0223 13:31:39.447558   20201 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.26.1 succeeded
	I0223 13:31:39.447555   20201 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.26.1 succeeded
	I0223 13:31:39.447504   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 exists
	I0223 13:31:39.447568   20201 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1" took 2.933252ms
	I0223 13:31:39.447576   20201 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.26.1 succeeded
	I0223 13:31:39.447508   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 exists
	I0223 13:31:39.447588   20201 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.26.1" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1" took 1.801974ms
	I0223 13:31:39.447594   20201 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.26.1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.26.1 succeeded
	I0223 13:31:39.447510   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0223 13:31:39.447606   20201 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 1.642695ms
	I0223 13:31:39.447611   20201 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0223 13:31:39.447498   20201 cache.go:115] /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 exists
	I0223 13:31:39.447618   20201 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.9.3" -> "/Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3" took 1.845761ms
	I0223 13:31:39.447623   20201 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.9.3 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.9.3 succeeded
	I0223 13:31:39.447629   20201 cache.go:87] Successfully saved all images to host disk.
	I0223 13:31:39.500859   20201 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:31:39.500875   20201 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:31:39.500893   20201 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:31:39.500930   20201 start.go:364] acquiring machines lock for no-preload-317000: {Name:mkb232fc445eaf810e5edc9a0f7dd58d965890d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:31:39.501004   20201 start.go:368] acquired machines lock for "no-preload-317000" in 62.725µs
	I0223 13:31:39.501029   20201 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:31:39.501038   20201 fix.go:55] fixHost starting: 
	I0223 13:31:39.501278   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:39.555105   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:39.555157   20201 fix.go:103] recreateIfNeeded on no-preload-317000: state= err=unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:39.555184   20201 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:31:39.598585   20201 out.go:177] * docker "no-preload-317000" container is missing, will recreate.
	I0223 13:31:39.619678   20201 delete.go:124] DEMOLISHING no-preload-317000 ...
	I0223 13:31:39.619857   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:39.675442   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:39.675485   20201 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:39.675500   20201 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:39.675893   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:39.731890   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:39.731939   20201 delete.go:82] Unable to get host status for no-preload-317000, assuming it has already been deleted: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:39.732029   20201 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-317000
	W0223 13:31:39.786014   20201 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-317000 returned with exit code 1
	I0223 13:31:39.786051   20201 kic.go:367] could not find the container no-preload-317000 to remove it. will try anyways
	I0223 13:31:39.786133   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:39.840175   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:31:39.840228   20201 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:39.840305   20201 cli_runner.go:164] Run: docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0"
	W0223 13:31:39.894652   20201 cli_runner.go:211] docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:31:39.894679   20201 oci.go:641] error shutdown no-preload-317000: docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:40.896246   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:40.956178   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:40.956217   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:40.956230   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:40.956280   20201 retry.go:31] will retry after 267.424605ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:41.224029   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:41.282773   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:41.282811   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:41.282819   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:41.282838   20201 retry.go:31] will retry after 1.023113275s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:42.308334   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:42.366991   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:42.367037   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:42.367048   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:42.367068   20201 retry.go:31] will retry after 906.744815ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:43.275239   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:43.331132   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:43.331181   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:43.331189   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:43.331207   20201 retry.go:31] will retry after 859.567066ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:44.192700   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:44.249846   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:44.249890   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:44.249897   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:44.249926   20201 retry.go:31] will retry after 3.337424577s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:47.588249   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:47.646111   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:47.646151   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:47.646160   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:47.646180   20201 retry.go:31] will retry after 2.898152265s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:50.546890   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:50.602842   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:50.602884   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:50.602891   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:50.602920   20201 retry.go:31] will retry after 6.444250432s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:57.047720   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:31:57.105800   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:31:57.105839   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:31:57.105847   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:31:57.105872   20201 oci.go:88] couldn't shut down no-preload-317000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	 
	I0223 13:31:57.105946   20201 cli_runner.go:164] Run: docker rm -f -v no-preload-317000
	I0223 13:31:57.162288   20201 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-317000
	W0223 13:31:57.216969   20201 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-317000 returned with exit code 1
	I0223 13:31:57.217090   20201 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:31:57.272752   20201 cli_runner.go:164] Run: docker network rm no-preload-317000
	W0223 13:31:57.391501   20201 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:31:57.391520   20201 fix.go:115] Sleeping 1 second for extra luck!
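Note for readers tracing this failure: the block above (repeated "temporary error verifying shutdown ... will retry after ..." lines followed by "couldn't shut down ... (might be okay)" and a forced `docker rm -f -v`) is a simple inspect-and-back-off loop. The sketch below is illustrative only, assuming nothing beyond what the log shows; the function names, attempt count and timings are ours, not minikube's actual source.

// Illustrative sketch of the shutdown-verification loop seen above: inspect the
// container's state and retry with a growing, jittered delay until it reports
// "exited" or we give up and fall back to a forced delete.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// containerStatus shells out to `docker container inspect`, exactly as the log does.
func containerStatus(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

// waitForExited mirrors the "will retry after ..." pattern: each failed check
// sleeps a little longer (with jitter) before re-inspecting the container.
func waitForExited(name string, attempts int) error {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		status, err := containerStatus(name)
		if err == nil && status == "exited" {
			return nil
		}
		fmt.Printf("temporary error: container %s status is %q, retrying in %v\n", name, status, delay)
		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
		delay *= 2
	}
	return fmt.Errorf("couldn't verify container %s is exited", name)
}

func main() {
	if err := waitForExited("no-preload-317000", 8); err != nil {
		// The log treats this as non-fatal and proceeds to `docker rm -f -v`.
		fmt.Println("giving up (might be okay):", err)
	}
}

Here every inspect fails with "No such container", so the loop can never observe an "exited" state and simply exhausts its retries before the forced removal.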
	I0223 13:31:58.392109   20201 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:31:58.413904   20201 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:31:58.414097   20201 start.go:159] libmachine.API.Create for "no-preload-317000" (driver="docker")
	I0223 13:31:58.414126   20201 client.go:168] LocalClient.Create starting
	I0223 13:31:58.414352   20201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:31:58.414439   20201 main.go:141] libmachine: Decoding PEM data...
	I0223 13:31:58.414472   20201 main.go:141] libmachine: Parsing certificate...
	I0223 13:31:58.414601   20201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:31:58.414670   20201 main.go:141] libmachine: Decoding PEM data...
	I0223 13:31:58.414685   20201 main.go:141] libmachine: Parsing certificate...
	I0223 13:31:58.435408   20201 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:31:58.496854   20201 cli_runner.go:211] docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:31:58.496947   20201 network_create.go:281] running [docker network inspect no-preload-317000] to gather additional debugging logs...
	I0223 13:31:58.496971   20201 cli_runner.go:164] Run: docker network inspect no-preload-317000
	W0223 13:31:58.551572   20201 cli_runner.go:211] docker network inspect no-preload-317000 returned with exit code 1
	I0223 13:31:58.551596   20201 network_create.go:284] error running [docker network inspect no-preload-317000]: docker network inspect no-preload-317000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-317000
	I0223 13:31:58.551606   20201 network_create.go:286] output of [docker network inspect no-preload-317000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-317000
	
	** /stderr **
	I0223 13:31:58.551682   20201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:31:58.607909   20201 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:31:58.608265   20201 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000b76ad0}
	I0223 13:31:58.608278   20201 network_create.go:123] attempt to create docker network no-preload-317000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:31:58.608348   20201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	W0223 13:31:58.662089   20201 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000 returned with exit code 1
	W0223 13:31:58.662133   20201 network_create.go:148] failed to create docker network no-preload-317000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:31:58.662144   20201 network_create.go:115] failed to create docker network no-preload-317000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:31:58.663674   20201 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:31:58.663989   20201 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000b77920}
	I0223 13:31:58.664000   20201 network_create.go:123] attempt to create docker network no-preload-317000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:31:58.664068   20201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	I0223 13:31:58.752619   20201 network_create.go:107] docker network no-preload-317000 192.168.67.0/24 created
	I0223 13:31:58.752647   20201 kic.go:117] calculated static IP "192.168.67.2" for the "no-preload-317000" container
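The network creation just above follows a probe-and-fall-through pattern: candidate 192.168.x.0/24 blocks are tried in order (the log steps the third octet by 9: 49, 58, 67, ...), known-reserved subnets are skipped, and a "Pool overlaps with other one" error from `docker network create` marks the subnet taken and advances to the next candidate. The sketch below is a simplified illustration of that pattern, not minikube's network_create.go; it omits the --ip-masq/--icc options and minikube labels shown in the logged command.

// Illustrative sketch of the subnet probing visible above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func createClusterNetwork(name string, reserved map[string]bool) (string, error) {
	for octet := 49; octet <= 255; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		if reserved[subnet] {
			fmt.Println("skipping subnet", subnet, "that is reserved")
			continue
		}
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			// Same behaviour as the log: mark the subnet taken, try the next one.
			fmt.Println("subnet", subnet, "is taken, will retry with the next candidate")
			reserved[subnet] = true
			continue
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createClusterNetwork("no-preload-317000", map[string]bool{"192.168.49.0/24": true})
	fmt.Println(subnet, err)
}

In this run the pattern works as intended: 192.168.58.0/24 overlaps an existing pool, and the retry lands on 192.168.67.0/24, which succeeds.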
	I0223 13:31:58.752767   20201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:31:58.810267   20201 cli_runner.go:164] Run: docker volume create no-preload-317000 --label name.minikube.sigs.k8s.io=no-preload-317000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:31:58.864266   20201 oci.go:103] Successfully created a docker volume no-preload-317000
	I0223 13:31:58.864382   20201 cli_runner.go:164] Run: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:31:58.996889   20201 cli_runner.go:211] docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:31:58.996936   20201 client.go:171] LocalClient.Create took 582.780635ms
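The `docker run` that just failed with exit code 125 is the volume-preparation probe: it mounts the freshly created `no-preload-317000` volume at /var inside the kicbase image and runs `/usr/bin/test -d /var/lib` to confirm the volume is usable. An exit status of 125 from `docker run` comes from the docker CLI/daemon itself rather than from the probed command, which is consistent with the "connection refused" error on the Docker Desktop containerd socket reported further down. A minimal sketch of that probe, under those assumptions (helper name is ours; the image reference is shortened from the digest-pinned one in the log):

// Illustrative sketch of the preload-sidecar volume probe.
package main

import (
	"fmt"
	"os/exec"
)

func probeVolume(volume, image string) error {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", volume+":/var",
		image,
		"-d", "/var/lib")
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// 125: docker could not start the container at all (daemon-side failure),
		// whereas 1 would mean the container ran but /var/lib was missing.
		return fmt.Errorf("docker run exited %d: %s", exitErr.ExitCode(), out)
	}
	return err
}

func main() {
	err := probeVolume("no-preload-317000", "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768")
	fmt.Println(err)
}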
	I0223 13:32:00.997726   20201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:32:00.997861   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:01.055556   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:01.055649   20201 retry.go:31] will retry after 146.617586ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:01.203936   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:01.264521   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:01.264611   20201 retry.go:31] will retry after 390.581792ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:01.656922   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:01.714510   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:01.714594   20201 retry.go:31] will retry after 284.222145ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:02.001186   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:02.061563   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:02.061660   20201 retry.go:31] will retry after 865.894795ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:02.929912   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:02.989011   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:02.989122   20201 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:02.989138   20201 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:02.989202   20201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:32:02.989261   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:03.043488   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:03.043588   20201 retry.go:31] will retry after 206.654755ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:03.252659   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:03.313244   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:03.313339   20201 retry.go:31] will retry after 424.228484ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:03.739453   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:03.796609   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:03.796706   20201 retry.go:31] will retry after 839.776221ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:04.637758   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:04.698777   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:04.698877   20201 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:04.698889   20201 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:04.698894   20201 start.go:128] duration metric: createHost completed in 6.30659628s
	I0223 13:32:04.698980   20201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:32:04.699027   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:04.753995   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:04.754085   20201 retry.go:31] will retry after 229.89836ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:04.986324   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:05.043310   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:05.043389   20201 retry.go:31] will retry after 247.456441ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:05.292644   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:05.352139   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:05.352216   20201 retry.go:31] will retry after 596.914663ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:05.951549   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:06.012495   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:06.012595   20201 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:06.012617   20201 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:06.012680   20201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:32:06.012732   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:06.067900   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:06.067987   20201 retry.go:31] will retry after 299.544721ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:06.368812   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:06.426164   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:06.426242   20201 retry.go:31] will retry after 228.897601ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:06.657580   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:06.714420   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:06.714501   20201 retry.go:31] will retry after 382.537659ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:07.097413   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:07.157193   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:07.157291   20201 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:07.157307   20201 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:07.157311   20201 fix.go:57] fixHost completed within 27.654902303s
	I0223 13:32:07.157317   20201 start.go:83] releasing machines lock for "no-preload-317000", held for 27.654934556s
	W0223 13:32:07.157333   20201 start.go:691] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W0223 13:32:07.157465   20201 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:32:07.157472   20201 start.go:706] Will try again in 5 seconds ...
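That closes the first recreate attempt. Every "get port 22" retry in the preceding disk-space checks is performing the same lookup: read the published host port for 22/tcp from `docker container inspect`, which is the only way the SSH-based `df -h /var` and `df -BG /var` probes can reach the node. With the container never created, each inspect exits 1 and the probes can never run. A minimal sketch of that lookup, assuming only the template shown in the log (helper name is ours):

// Illustrative sketch of the "get ssh host-port" lookup that keeps failing above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("no-preload-317000")
	fmt.Println(port, err)
}

The second attempt below repeats the same demolish/recreate sequence and fails the same way, on the next free subnet (192.168.85.0/24).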
	I0223 13:32:12.159834   20201 start.go:364] acquiring machines lock for no-preload-317000: {Name:mkb232fc445eaf810e5edc9a0f7dd58d965890d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:32:12.160025   20201 start.go:368] acquired machines lock for "no-preload-317000" in 153.913µs
	I0223 13:32:12.160068   20201 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:32:12.160077   20201 fix.go:55] fixHost starting: 
	I0223 13:32:12.160493   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:12.218599   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:12.218646   20201 fix.go:103] recreateIfNeeded on no-preload-317000: state= err=unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:12.218656   20201 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:32:12.259973   20201 out.go:177] * docker "no-preload-317000" container is missing, will recreate.
	I0223 13:32:12.280894   20201 delete.go:124] DEMOLISHING no-preload-317000 ...
	I0223 13:32:12.281106   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:12.336893   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:32:12.336936   20201 stop.go:75] unable to get state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:12.336959   20201 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:12.337331   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:12.390600   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:12.390645   20201 delete.go:82] Unable to get host status for no-preload-317000, assuming it has already been deleted: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:12.390717   20201 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-317000
	W0223 13:32:12.444876   20201 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-317000 returned with exit code 1
	I0223 13:32:12.444904   20201 kic.go:367] could not find the container no-preload-317000 to remove it. will try anyways
	I0223 13:32:12.444977   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:12.502552   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	W0223 13:32:12.502593   20201 oci.go:84] error getting container status, will try to delete anyways: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:12.502683   20201 cli_runner.go:164] Run: docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0"
	W0223 13:32:12.557779   20201 cli_runner.go:211] docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:32:12.557808   20201 oci.go:641] error shutdown no-preload-317000: docker exec --privileged -t no-preload-317000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:13.559392   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:13.617868   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:13.617912   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:13.617921   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:13.617947   20201 retry.go:31] will retry after 434.399112ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:14.054650   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:14.112883   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:14.112931   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:14.112938   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:14.112975   20201 retry.go:31] will retry after 874.595204ms: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:14.988216   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:15.046419   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:15.046460   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:15.046467   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:15.046485   20201 retry.go:31] will retry after 1.642073166s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:16.688772   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:16.744256   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:16.744299   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:16.744306   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:16.744327   20201 retry.go:31] will retry after 1.537587298s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:18.284367   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:18.341017   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:18.341057   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:18.341064   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:18.341084   20201 retry.go:31] will retry after 3.27811924s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:21.620748   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:21.678558   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:21.678606   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:21.678616   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:21.678636   20201 retry.go:31] will retry after 4.031682883s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:25.712656   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:25.774361   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:25.774403   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:25.774409   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:25.774430   20201 retry.go:31] will retry after 5.884820968s: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:31.659713   20201 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:31.716173   20201 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:31.716216   20201 oci.go:653] temporary error verifying shutdown: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:31.716231   20201 oci.go:655] temporary error: container no-preload-317000 status is  but expect it to be exited
	I0223 13:32:31.716257   20201 oci.go:88] couldn't shut down no-preload-317000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	 
	I0223 13:32:31.716341   20201 cli_runner.go:164] Run: docker rm -f -v no-preload-317000
	I0223 13:32:31.772996   20201 cli_runner.go:164] Run: docker container inspect -f {{.Id}} no-preload-317000
	W0223 13:32:31.827001   20201 cli_runner.go:211] docker container inspect -f {{.Id}} no-preload-317000 returned with exit code 1
	I0223 13:32:31.827125   20201 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:32:31.881859   20201 cli_runner.go:164] Run: docker network rm no-preload-317000
	W0223 13:32:31.992926   20201 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:32:31.992946   20201 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:32:32.993854   20201 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:32:33.015959   20201 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:32:33.016143   20201 start.go:159] libmachine.API.Create for "no-preload-317000" (driver="docker")
	I0223 13:32:33.016179   20201 client.go:168] LocalClient.Create starting
	I0223 13:32:33.016392   20201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:32:33.016485   20201 main.go:141] libmachine: Decoding PEM data...
	I0223 13:32:33.016507   20201 main.go:141] libmachine: Parsing certificate...
	I0223 13:32:33.016611   20201 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:32:33.016674   20201 main.go:141] libmachine: Decoding PEM data...
	I0223 13:32:33.016689   20201 main.go:141] libmachine: Parsing certificate...
	I0223 13:32:33.037993   20201 cli_runner.go:164] Run: docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:32:33.098120   20201 cli_runner.go:211] docker network inspect no-preload-317000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:32:33.098206   20201 network_create.go:281] running [docker network inspect no-preload-317000] to gather additional debugging logs...
	I0223 13:32:33.098224   20201 cli_runner.go:164] Run: docker network inspect no-preload-317000
	W0223 13:32:33.153206   20201 cli_runner.go:211] docker network inspect no-preload-317000 returned with exit code 1
	I0223 13:32:33.153235   20201 network_create.go:284] error running [docker network inspect no-preload-317000]: docker network inspect no-preload-317000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: no-preload-317000
	I0223 13:32:33.153246   20201 network_create.go:286] output of [docker network inspect no-preload-317000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: no-preload-317000
	
	** /stderr **
	I0223 13:32:33.153322   20201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:32:33.210118   20201 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:33.211629   20201 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:33.213134   20201 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:33.213467   20201 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ef1ac0}
	I0223 13:32:33.213478   20201 network_create.go:123] attempt to create docker network no-preload-317000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:32:33.213561   20201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	W0223 13:32:33.268605   20201 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000 returned with exit code 1
	W0223 13:32:33.268641   20201 network_create.go:148] failed to create docker network no-preload-317000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:32:33.268659   20201 network_create.go:115] failed to create docker network no-preload-317000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:32:33.270064   20201 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:33.270393   20201 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00182d130}
	I0223 13:32:33.270403   20201 network_create.go:123] attempt to create docker network no-preload-317000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:32:33.270467   20201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-317000 no-preload-317000
	I0223 13:32:33.356985   20201 network_create.go:107] docker network no-preload-317000 192.168.85.0/24 created
	I0223 13:32:33.357014   20201 kic.go:117] calculated static IP "192.168.85.2" for the "no-preload-317000" container
	I0223 13:32:33.357136   20201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:32:33.414574   20201 cli_runner.go:164] Run: docker volume create no-preload-317000 --label name.minikube.sigs.k8s.io=no-preload-317000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:32:33.468772   20201 oci.go:103] Successfully created a docker volume no-preload-317000
	I0223 13:32:33.468902   20201 cli_runner.go:164] Run: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:32:33.605582   20201 cli_runner.go:211] docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:32:33.605623   20201 client.go:171] LocalClient.Create took 589.431635ms
	I0223 13:32:35.606716   20201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:32:35.606840   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:35.666288   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:35.666373   20201 retry.go:31] will retry after 192.143982ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:35.860922   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:35.920671   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:35.920773   20201 retry.go:31] will retry after 190.883347ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:36.113472   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:36.174499   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:36.174585   20201 retry.go:31] will retry after 303.50432ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:36.479873   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:36.538906   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:36.538992   20201 retry.go:31] will retry after 588.881435ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:37.130161   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:37.189505   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:37.189594   20201 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:37.189616   20201 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:37.189679   20201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:32:37.189724   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:37.244093   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:37.244177   20201 retry.go:31] will retry after 277.759447ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:37.522722   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:37.582152   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:37.582251   20201 retry.go:31] will retry after 447.722768ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:38.030274   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:38.091930   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:38.092014   20201 retry.go:31] will retry after 560.410447ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:38.653454   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:38.719694   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:38.719791   20201 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:38.719817   20201 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:38.719833   20201 start.go:128] duration metric: createHost completed in 5.725929658s
	I0223 13:32:38.719921   20201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:32:38.719974   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:38.773562   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:38.773646   20201 retry.go:31] will retry after 161.003471ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:38.936860   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:38.996538   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:38.996629   20201 retry.go:31] will retry after 366.241056ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:39.364169   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:39.420079   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:39.420175   20201 retry.go:31] will retry after 614.6515ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:40.037187   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:40.095291   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:40.095379   20201 retry.go:31] will retry after 560.881757ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:40.657585   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:40.715607   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:40.715694   20201 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:40.715708   20201 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:40.715777   20201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:32:40.715821   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:40.770394   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:40.770477   20201 retry.go:31] will retry after 240.584191ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:41.011816   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:41.071712   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:41.071801   20201 retry.go:31] will retry after 350.051297ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:41.424312   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:41.481363   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:41.481446   20201 retry.go:31] will retry after 407.343429ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:41.891265   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:41.950064   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	I0223 13:32:41.950148   20201 retry.go:31] will retry after 723.682351ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:42.676237   20201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000
	W0223 13:32:42.734207   20201 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000 returned with exit code 1
	W0223 13:32:42.734305   20201 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:42.734321   20201 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "no-preload-317000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-317000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	I0223 13:32:42.734326   20201 fix.go:57] fixHost completed within 30.574027542s
	I0223 13:32:42.734334   20201 start.go:83] releasing machines lock for "no-preload-317000", held for 30.574073699s
	W0223 13:32:42.734479   20201 out.go:239] * Failed to start docker container. Running "minikube delete -p no-preload-317000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p no-preload-317000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:32:42.777894   20201 out.go:177] 
	W0223 13:32:42.799200   20201 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for no-preload-317000 container: docker run --rm --name no-preload-317000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-317000 --entrypoint /usr/bin/test -v no-preload-317000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:32:42.799226   20201 out.go:239] * 
	* 
	W0223 13:32:42.800140   20201 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:32:42.882922   20201 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p no-preload-317000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.26.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "47921695fbdc3d37da5cf2e58f1b0e4c7e7aa31611274cdc8555f9e971d2863e",
	        "Created": "2023-02-23T21:32:33.321108511Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (101.903284ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:32:43.084329   20518 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (64.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-317000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "47921695fbdc3d37da5cf2e58f1b0e4c7e7aa31611274cdc8555f9e971d2863e",
	        "Created": "2023-02-23T21:32:33.321108511Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (100.544016ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:32:43.243522   20524 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-317000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-317000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-317000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (34.79406ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-317000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-317000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "47921695fbdc3d37da5cf2e58f1b0e4c7e7aa31611274cdc8555f9e971d2863e",
	        "Created": "2023-02-23T21:32:33.321108511Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (101.357428ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:32:43.438797   20531 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-317000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p no-preload-317000 "sudo crictl images -o json": exit status 80 (193.301813ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p no-preload-317000 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.26.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.6-0",
- 	"registry.k8s.io/kube-apiserver:v1.26.1",
- 	"registry.k8s.io/kube-controller-manager:v1.26.1",
- 	"registry.k8s.io/kube-proxy:v1.26.1",
- 	"registry.k8s.io/kube-scheduler:v1.26.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "47921695fbdc3d37da5cf2e58f1b0e4c7e7aa31611274cdc8555f9e971d2863e",
	        "Created": "2023-02-23T21:32:33.321108511Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (101.780655ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:32:43.793324   20541 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-317000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p no-preload-317000 --alsologtostderr -v=1: exit status 80 (194.188775ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:32:43.838173   20545 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:32:43.838346   20545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:32:43.838351   20545 out.go:309] Setting ErrFile to fd 2...
	I0223 13:32:43.838355   20545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:32:43.838463   20545 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:32:43.838775   20545 out.go:303] Setting JSON to false
	I0223 13:32:43.838791   20545 mustload.go:65] Loading cluster: no-preload-317000
	I0223 13:32:43.839047   20545 config.go:182] Loaded profile config "no-preload-317000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:32:43.839469   20545 cli_runner.go:164] Run: docker container inspect no-preload-317000 --format={{.State.Status}}
	W0223 13:32:43.894485   20545 cli_runner.go:211] docker container inspect no-preload-317000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:43.917515   20545 out.go:177] 
	W0223 13:32:43.939186   20545 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	X Exiting due to GUEST_STATUS: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000
	
	W0223 13:32:43.939214   20545 out.go:239] * 
	* 
	W0223 13:32:43.943776   20545 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:32:43.965146   20545 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p no-preload-317000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "47921695fbdc3d37da5cf2e58f1b0e4c7e7aa31611274cdc8555f9e971d2863e",
	        "Created": "2023-02-23T21:32:33.321108511Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (99.532409ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:32:44.147175   20551 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect no-preload-317000
helpers_test.go:235: (dbg) docker inspect no-preload-317000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "no-preload-317000",
	        "Id": "47921695fbdc3d37da5cf2e58f1b0e4c7e7aa31611274cdc8555f9e971d2863e",
	        "Created": "2023-02-23T21:32:33.321108511Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "no-preload-317000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-317000 -n no-preload-317000: exit status 7 (101.188374ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:32:44.308141   20557 status.go:249] status error: host: state: unknown state "no-preload-317000": docker container inspect no-preload-317000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: no-preload-317000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "no-preload-317000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (0.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (43.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-035000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
E0223 13:32:54.393329    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/skaffold-719000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-035000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (42.996691805s)

                                                
                                                
-- stdout --
	* [embed-certs-035000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node embed-certs-035000 in cluster embed-certs-035000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-035000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:32:45.569741   20598 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:32:45.569901   20598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:32:45.569907   20598 out.go:309] Setting ErrFile to fd 2...
	I0223 13:32:45.569911   20598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:32:45.570024   20598 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:32:45.571425   20598 out.go:303] Setting JSON to false
	I0223 13:32:45.589758   20598 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3740,"bootTime":1677184225,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:32:45.589845   20598 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:32:45.611802   20598 out.go:177] * [embed-certs-035000] minikube v1.29.0 on Darwin 13.2
	I0223 13:32:45.653940   20598 notify.go:220] Checking for updates...
	I0223 13:32:45.675704   20598 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:32:45.696823   20598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:32:45.717828   20598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:32:45.739069   20598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:32:45.760689   20598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:32:45.781701   20598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:32:45.803546   20598 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:32:45.803715   20598 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:32:45.803840   20598 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:32:45.803902   20598 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:32:45.864980   20598 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:32:45.865161   20598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:32:46.007204   20598 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:32:45.915269695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:32:46.029119   20598 out.go:177] * Using the docker driver based on user configuration
	I0223 13:32:46.050723   20598 start.go:296] selected driver: docker
	I0223 13:32:46.050737   20598 start.go:857] validating driver "docker" against <nil>
	I0223 13:32:46.050766   20598 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:32:46.053274   20598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:32:46.195497   20598 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:32:46.103619626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
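Before and during driver selection, minikube shells out to `docker system info --format "{{json .}}"` and reads fields such as NCPU, MemTotal, ServerVersion and OperatingSystem from the JSON dumped above. The following is a minimal, hypothetical reproduction of that probe; the `dockerInfo` struct is illustrative and decodes only a handful of the fields visible in the log, not minikube's own info type.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo captures a few of the `docker system info` JSON fields that the
// log above shows minikube inspecting. Illustrative only.
type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	OSType          string `json:"OSType"`
}

func main() {
	// Same invocation as in the log: ask the daemon for its info as JSON.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		fmt.Println("docker system info failed:", err)
		return
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Printf("%s %s: %d CPUs, %d bytes RAM (%s)\n",
		info.OperatingSystem, info.ServerVersion, info.NCPU, info.MemTotal, info.OSType)
}
```

In this run the probe reported a healthy-looking daemon (6 CPUs, ~6.2 GB of memory, server 20.10.22), which is why the docker driver was accepted even though container creation later failed.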
	I0223 13:32:46.195635   20598 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:32:46.195801   20598 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:32:46.217631   20598 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:32:46.239260   20598 cni.go:84] Creating CNI manager for ""
	I0223 13:32:46.239396   20598 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:32:46.239413   20598 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
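The cni.go and start_flags.go lines above encode a simple rule: the "docker" driver combined with the "docker" container runtime on Kubernetes v1.24 or newer gets the bridge CNI, and NetworkPlugin is set to cni. A hedged sketch of that decision rule follows; the function name and signature are invented for illustration and are not minikube's actual cni.New logic.

```go
package main

import "fmt"

// chooseCNI mirrors the decision logged above: docker driver + docker
// runtime on Kubernetes v1.24+ -> recommend the bridge CNI.
func chooseCNI(driver, runtime string, k8sMinor int) string {
	if driver == "docker" && runtime == "docker" && k8sMinor >= 24 {
		return "bridge"
	}
	return "" // otherwise leave the choice to the driver/runtime defaults
}

func main() {
	fmt.Println("recommended CNI:", chooseCNI("docker", "docker", 26)) // v1.26.1 -> bridge
}
```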
	I0223 13:32:46.239427   20598 start_flags.go:319] config:
	{Name:embed-certs-035000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:32:46.261385   20598 out.go:177] * Starting control plane node embed-certs-035000 in cluster embed-certs-035000
	I0223 13:32:46.283125   20598 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:32:46.304240   20598 out.go:177] * Pulling base image ...
	I0223 13:32:46.346319   20598 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:32:46.346370   20598 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:32:46.346383   20598 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:32:46.346397   20598 cache.go:57] Caching tarball of preloaded images
	I0223 13:32:46.346521   20598 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:32:46.346530   20598 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
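The preload.go lines above show the cache check: minikube looks for a preloaded-images tarball named after the Kubernetes version, container runtime and storage driver under `.minikube/cache/preloaded-tarball/`, and skips the download when the file is already present. Below is a minimal sketch of that existence check, assuming only the naming pattern visible in the log; the helper itself is hypothetical.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath builds a tarball path following the pattern seen in the log:
// preloaded-images-k8s-v18-<k8sVersion>-<runtime>-overlay2-amd64.tar.lz4
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube"), "v1.26.1", "docker")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}
```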
	I0223 13:32:46.347212   20598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/embed-certs-035000/config.json ...
	I0223 13:32:46.347291   20598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/embed-certs-035000/config.json: {Name:mk37ba39e9b049849e0845131c6e544be497edec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:32:46.405663   20598 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:32:46.405706   20598 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:32:46.405725   20598 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:32:46.405786   20598 start.go:364] acquiring machines lock for embed-certs-035000: {Name:mk109788415ddd73a83a349dd1a61647eb0703e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:32:46.405934   20598 start.go:368] acquired machines lock for "embed-certs-035000" in 136.279µs
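start.go:364/368 acquire a named "machines" lock (with a 500ms retry delay and a 10 minute timeout, per the options printed above) before provisioning, so that concurrent starts of the same profile do not race while creating the host. The sketch below shows one way to get that acquire-with-retry behaviour using a plain lock file; it is an illustrative stand-in under those Delay/Timeout values, not the locking implementation minikube actually uses.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// acquireLock polls for an exclusive lock file every delay until timeout.
// O_CREATE|O_EXCL guarantees only one process can create the file at a time.
func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquireLock("/tmp/machines-embed-certs-035000.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; host provisioning could proceed here")
}
```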
	I0223 13:32:46.405968   20598 start.go:93] Provisioning new machine with config: &{Name:embed-certs-035000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:32:46.406043   20598 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:32:46.448440   20598 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:32:46.448893   20598 start.go:159] libmachine.API.Create for "embed-certs-035000" (driver="docker")
	I0223 13:32:46.448944   20598 client.go:168] LocalClient.Create starting
	I0223 13:32:46.449257   20598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:32:46.449372   20598 main.go:141] libmachine: Decoding PEM data...
	I0223 13:32:46.449409   20598 main.go:141] libmachine: Parsing certificate...
	I0223 13:32:46.449534   20598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:32:46.449597   20598 main.go:141] libmachine: Decoding PEM data...
	I0223 13:32:46.449619   20598 main.go:141] libmachine: Parsing certificate...
	I0223 13:32:46.450582   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:32:46.507937   20598 cli_runner.go:211] docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:32:46.508035   20598 network_create.go:281] running [docker network inspect embed-certs-035000] to gather additional debugging logs...
	I0223 13:32:46.508052   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000
	W0223 13:32:46.561699   20598 cli_runner.go:211] docker network inspect embed-certs-035000 returned with exit code 1
	I0223 13:32:46.561724   20598 network_create.go:284] error running [docker network inspect embed-certs-035000]: docker network inspect embed-certs-035000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-035000
	I0223 13:32:46.561734   20598 network_create.go:286] output of [docker network inspect embed-certs-035000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-035000
	
	** /stderr **
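The `docker network inspect ... --format` calls above pass a Go template that renders the network's name, driver, subnet, gateway, MTU and attached container IPs as a single JSON-like line. When the network does not exist yet, the command exits 1 with "No such network", which minikube takes as the signal to create it. A small sketch of the same probe, reusing the subnet fragment of that template (run here via os/exec rather than minikube's cli_runner):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "embed-certs-035000"
	// Same template fragment as in the log: iterate IPAM configs, print each subnet.
	out, err := exec.Command("docker", "network", "inspect", name,
		"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").CombinedOutput()
	if err != nil {
		// "No such network" => exit status 1: the network still has to be created.
		fmt.Printf("network %q not inspectable (%v): %s", name, err, out)
		return
	}
	fmt.Printf("network %q uses subnet %s\n", name, strings.TrimSpace(string(out)))
}
```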
	I0223 13:32:46.561828   20598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:32:46.618137   20598 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:46.618459   20598 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00117ac00}
	I0223 13:32:46.618472   20598 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:32:46.618538   20598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	W0223 13:32:46.673339   20598 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000 returned with exit code 1
	W0223 13:32:46.673377   20598 network_create.go:148] failed to create docker network embed-certs-035000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:32:46.673391   20598 network_create.go:115] failed to create docker network embed-certs-035000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:32:46.674776   20598 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:46.675115   20598 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000155700}
	I0223 13:32:46.675129   20598 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:32:46.675199   20598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	I0223 13:32:46.761396   20598 network_create.go:107] docker network embed-certs-035000 192.168.67.0/24 created
	I0223 13:32:46.761436   20598 kic.go:117] calculated static IP "192.168.67.2" for the "embed-certs-035000" container
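network.go steps through candidate private /24 ranges in increments of 9 in the third octet (192.168.49.0/24, 58, 67, 76, 85, 94, ..., as both this attempt and the later retry show), skipping ranges already reserved and moving on when `docker network create` reports "Pool overlaps with other one on this address space". Here 192.168.58.0/24 was taken, 192.168.67.0/24 succeeded, and the node's static IP became the .2 host address. A hedged sketch of that scan-and-create loop; the step size, flags and error string are taken from the log, everything else is illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createClusterNetwork tries candidate 192.168.X.0/24 subnets (X = 49, 58, 67, ...)
// until `docker network create` succeeds, skipping ranges Docker reports as overlapping.
func createClusterNetwork(name string) (string, error) {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		gateway := fmt.Sprintf("192.168.%d.1", octet)
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true", name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps with other one on this address space") {
			continue // subnet taken by another network, try the next candidate
		}
		return "", fmt.Errorf("network create failed: %v: %s", err, out)
	}
	return "", fmt.Errorf("no free private subnet found for %s", name)
}

func main() {
	subnet, err := createClusterNetwork("embed-certs-035000")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("created network on", subnet, "- the node IP would be the .2 address")
}
```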
	I0223 13:32:46.761542   20598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:32:46.820260   20598 cli_runner.go:164] Run: docker volume create embed-certs-035000 --label name.minikube.sigs.k8s.io=embed-certs-035000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:32:46.875791   20598 oci.go:103] Successfully created a docker volume embed-certs-035000
	I0223 13:32:46.875903   20598 cli_runner.go:164] Run: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:32:47.194821   20598 cli_runner.go:211] docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:32:47.194865   20598 client.go:171] LocalClient.Create took 745.911306ms
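The failing command above is minikube's volume "preload sidecar": a throwaway `docker run --rm --entrypoint /usr/bin/test` that mounts the freshly created named volume at /var and runs `test -d /var/lib`, which both populates the volume from the kicbase image and confirms the directory exists. Exit code 125 means Docker itself could not start the container at all, as opposed to `test` returning 0 or 1; the stderr captured further down shows why (the daemon's containerd socket refusing connections). A minimal sketch distinguishing those outcomes; the image and mount are taken from the log, the error handling is illustrative.

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768"
	// Mount the named volume at /var and check that /var/lib exists inside the image.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/test",
		"-v", "embed-certs-035000:/var", image, "-d", "/var/lib")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("volume prepared: /var/lib exists in the image")
		return
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 125 {
		// docker could not create/start the container at all (daemon-side failure).
		fmt.Printf("docker run itself failed (exit 125): %s", out)
		return
	}
	fmt.Printf("test -d /var/lib failed, or docker is not runnable: %v: %s", err, out)
}
```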
	I0223 13:32:49.196393   20598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:32:49.196490   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:49.253313   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:49.253431   20598 retry.go:31] will retry after 330.631214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
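Because the `docker run` never produced a container, every subsequent attempt to find the host port published for 22/tcp (`docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`) fails with "No such container", and retry.go keeps retrying with short delays before the df checks finally give up. A sketch of that retry wrapper around the port lookup; the template string is the one from the log, while the helper names, fixed attempt count and flat delay are illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// sshHostPort asks Docker which host port is published for 22/tcp on the container.
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
	if err != nil {
		return "", fmt.Errorf("get port 22 for %q: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// retry calls fn up to attempts times, sleeping delay between failures.
func retry(attempts int, delay time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
	}
	return err
}

func main() {
	var port string
	err := retry(4, 300*time.Millisecond, func() error {
		p, err := sshHostPort("embed-certs-035000")
		port = p
		return err
	})
	if err != nil {
		fmt.Println("giving up:", err)
		return
	}
	fmt.Println("ssh reachable on host port", port)
}
```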
	I0223 13:32:49.586247   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:49.644192   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:49.644280   20598 retry.go:31] will retry after 294.648361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:49.941288   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:49.997479   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:49.997561   20598 retry.go:31] will retry after 554.080828ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:50.551907   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:50.611845   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:32:50.611931   20598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:32:50.611950   20598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:50.612017   20598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:32:50.612062   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:50.665971   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:50.666058   20598 retry.go:31] will retry after 282.430368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:50.949895   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:51.009402   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:51.009484   20598 retry.go:31] will retry after 345.329444ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:51.356975   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:51.413947   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:51.414029   20598 retry.go:31] will retry after 824.611642ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:52.241066   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:52.300464   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:32:52.300569   20598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:32:52.300586   20598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:52.300593   20598 start.go:128] duration metric: createHost completed in 5.894528037s
	I0223 13:32:52.300599   20598 start.go:83] releasing machines lock for "embed-certs-035000", held for 5.894639943s
	W0223 13:32:52.300614   20598 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
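This stderr is the real failure for the whole test: Docker Desktop's containerd endpoint (/var/run/desktop-containerd/containerd.sock) is refusing connections, so the daemon still answers API calls such as `network create` and `volume create` but cannot actually start containers, and every `docker run` dies with exit 125. One way to surface this class of problem early is a pre-flight check that the daemon can answer a server-side query; the sketch below is a hypothetical check, not something the log shows minikube doing at this point, and in a half-broken Docker Desktop it can still pass even though `docker run` fails, so it is only a first-line signal.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `docker version --format {{.Server.Version}}` needs a round trip to the
	// daemon, so it fails fast when the engine cannot be reached at all.
	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").CombinedOutput()
	if err != nil {
		fmt.Printf("docker daemon not reachable: %v: %s", err, out)
		return
	}
	fmt.Println("docker server version:", strings.TrimSpace(string(out)))
}
```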
	I0223 13:32:52.301041   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:52.355168   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:52.355219   20598 delete.go:82] Unable to get host status for embed-certs-035000, assuming it has already been deleted: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:32:52.355353   20598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:32:52.355362   20598 start.go:706] Will try again in 5 seconds ...
	I0223 13:32:57.357526   20598 start.go:364] acquiring machines lock for embed-certs-035000: {Name:mk109788415ddd73a83a349dd1a61647eb0703e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:32:57.357800   20598 start.go:368] acquired machines lock for "embed-certs-035000" in 125.135µs
	I0223 13:32:57.357846   20598 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:32:57.357859   20598 fix.go:55] fixHost starting: 
	I0223 13:32:57.358308   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.416131   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:57.416172   20598 fix.go:103] recreateIfNeeded on embed-certs-035000: state= err=unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.416194   20598 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:32:57.437649   20598 out.go:177] * docker "embed-certs-035000" container is missing, will recreate.
	I0223 13:32:57.480197   20598 delete.go:124] DEMOLISHING embed-certs-035000 ...
	I0223 13:32:57.480383   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.536017   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:32:57.536060   20598 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.536073   20598 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.536470   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.591257   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:57.591306   20598 delete.go:82] Unable to get host status for embed-certs-035000, assuming it has already been deleted: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.591393   20598 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:32:57.645287   20598 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:32:57.645319   20598 kic.go:367] could not find the container embed-certs-035000 to remove it. will try anyways
	I0223 13:32:57.645415   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.699147   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:32:57.699189   20598 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.699270   20598 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0"
	W0223 13:32:57.754399   20598 cli_runner.go:211] docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:32:57.754429   20598 oci.go:641] error shutdown embed-certs-035000: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:58.754815   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:58.813589   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:58.813632   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:58.813642   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:32:58.813663   20598 retry.go:31] will retry after 677.639959ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:59.493629   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:59.553687   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:59.553732   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:59.553740   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:32:59.553761   20598 retry.go:31] will retry after 1.086607878s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:00.641068   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:00.700289   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:00.700333   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:00.700341   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:00.700361   20598 retry.go:31] will retry after 1.177334734s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:01.877966   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:01.933097   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:01.933139   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:01.933146   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:01.933168   20598 retry.go:31] will retry after 1.865315659s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:03.800778   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:03.861730   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:03.861772   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:03.861785   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:03.861812   20598 retry.go:31] will retry after 1.743641955s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:05.605753   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:05.666296   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:05.666348   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:05.666356   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:05.666378   20598 retry.go:31] will retry after 2.796936104s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:08.463859   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:08.519540   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:08.519587   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:08.519596   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:08.519615   20598 retry.go:31] will retry after 3.211300582s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:11.732460   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:11.790273   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:11.790325   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:11.790333   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:11.790354   20598 retry.go:31] will retry after 6.765654074s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:18.556329   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:18.614324   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:18.614376   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:18.614385   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:18.614415   20598 oci.go:88] couldn't shut down embed-certs-035000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	 
	I0223 13:33:18.614484   20598 cli_runner.go:164] Run: docker rm -f -v embed-certs-035000
	I0223 13:33:18.684727   20598 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:33:18.739551   20598 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:33:18.739664   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:33:18.796486   20598 cli_runner.go:164] Run: docker network rm embed-certs-035000
	W0223 13:33:18.910106   20598 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:33:18.910126   20598 fix.go:115] Sleeping 1 second for extra luck!
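When the host is judged missing, fix.go falls back to the delete path shown above: attempt a graceful shutdown (`docker exec ... sudo init 0`), then force-remove the container (`docker rm -f -v`), then remove the per-cluster network (`docker network rm`), treating "No such container" and "No such network" as acceptable because there may be nothing left to clean up. A compact sketch of that best-effort teardown, using the same commands that appear in the log and deliberately ignoring their errors:

```go
package main

import (
	"fmt"
	"os/exec"
)

// demolish removes whatever docker artifacts exist for the profile; every step
// is best-effort because the container may never have been created.
func demolish(name string) {
	steps := [][]string{
		{"docker", "exec", "--privileged", "-t", name, "/bin/bash", "-c", "sudo init 0"}, // graceful stop first
		{"docker", "rm", "-f", "-v", name}, // force-remove the container and its anonymous volumes
		{"docker", "network", "rm", name},  // drop the per-cluster bridge network
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed (probably ok): %s", s, out)
		}
	}
}

func main() {
	demolish("embed-certs-035000")
}
```

After this teardown the second createHost attempt below repeats the same sequence (new subnet 192.168.94.0/24, new volume, preload sidecar) and fails in exactly the same way, because the underlying daemon problem has not gone away.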
	I0223 13:33:19.912256   20598 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:33:19.934256   20598 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:33:19.934458   20598 start.go:159] libmachine.API.Create for "embed-certs-035000" (driver="docker")
	I0223 13:33:19.934500   20598 client.go:168] LocalClient.Create starting
	I0223 13:33:19.934682   20598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:33:19.934782   20598 main.go:141] libmachine: Decoding PEM data...
	I0223 13:33:19.934812   20598 main.go:141] libmachine: Parsing certificate...
	I0223 13:33:19.934917   20598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:33:19.934986   20598 main.go:141] libmachine: Decoding PEM data...
	I0223 13:33:19.935007   20598 main.go:141] libmachine: Parsing certificate...
	I0223 13:33:19.955494   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:33:20.015825   20598 cli_runner.go:211] docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:33:20.015926   20598 network_create.go:281] running [docker network inspect embed-certs-035000] to gather additional debugging logs...
	I0223 13:33:20.015942   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000
	W0223 13:33:20.071356   20598 cli_runner.go:211] docker network inspect embed-certs-035000 returned with exit code 1
	I0223 13:33:20.071385   20598 network_create.go:284] error running [docker network inspect embed-certs-035000]: docker network inspect embed-certs-035000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-035000
	I0223 13:33:20.071396   20598 network_create.go:286] output of [docker network inspect embed-certs-035000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-035000
	
	** /stderr **
	I0223 13:33:20.071472   20598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:33:20.128356   20598 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:20.129866   20598 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:20.131188   20598 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:20.132742   20598 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:20.134311   20598 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:20.134695   20598 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010833a0}
	I0223 13:33:20.134706   20598 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0223 13:33:20.134783   20598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	I0223 13:33:20.222314   20598 network_create.go:107] docker network embed-certs-035000 192.168.94.0/24 created
	I0223 13:33:20.222343   20598 kic.go:117] calculated static IP "192.168.94.2" for the "embed-certs-035000" container
	I0223 13:33:20.222448   20598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:33:20.279332   20598 cli_runner.go:164] Run: docker volume create embed-certs-035000 --label name.minikube.sigs.k8s.io=embed-certs-035000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:33:20.334467   20598 oci.go:103] Successfully created a docker volume embed-certs-035000
	I0223 13:33:20.334582   20598 cli_runner.go:164] Run: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:33:20.471275   20598 cli_runner.go:211] docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:33:20.471322   20598 client.go:171] LocalClient.Create took 536.814531ms
	I0223 13:33:22.472768   20598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:33:22.472987   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:22.537823   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:22.537909   20598 retry.go:31] will retry after 325.822619ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:22.864231   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:22.924279   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:22.924368   20598 retry.go:31] will retry after 399.745271ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:23.325869   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:23.380803   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:23.380898   20598 retry.go:31] will retry after 490.094817ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:23.873166   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:23.930842   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:33:23.930934   20598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:33:23.930948   20598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:23.931035   20598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:33:23.931084   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:23.985193   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:23.985279   20598 retry.go:31] will retry after 321.896648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:24.307492   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:24.366262   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:24.366357   20598 retry.go:31] will retry after 312.200958ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:24.680954   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:24.738127   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:24.738214   20598 retry.go:31] will retry after 554.900178ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:25.295529   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:25.354458   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:33:25.354550   20598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:33:25.354575   20598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:25.354588   20598 start.go:128] duration metric: createHost completed in 5.44226317s
	I0223 13:33:25.354659   20598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:33:25.354711   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:25.409753   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:25.409852   20598 retry.go:31] will retry after 249.410673ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:25.661672   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:25.718147   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:25.718226   20598 retry.go:31] will retry after 499.302253ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:26.217923   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:26.271608   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:26.271695   20598 retry.go:31] will retry after 356.062363ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:26.628592   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:26.687898   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:33:26.687989   20598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:33:26.688003   20598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:26.688067   20598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:33:26.688118   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:26.743400   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:26.743484   20598 retry.go:31] will retry after 167.677277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:26.913516   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:26.972400   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:26.972497   20598 retry.go:31] will retry after 448.456318ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:27.421336   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:27.481927   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:33:27.482014   20598 retry.go:31] will retry after 808.112773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:28.292520   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:33:28.350704   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:33:28.350794   20598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:33:28.350827   20598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:28.350833   20598 fix.go:57] fixHost completed within 30.992909944s
	I0223 13:33:28.350840   20598 start.go:83] releasing machines lock for "embed-certs-035000", held for 30.992961059s
	W0223 13:33:28.351009   20598 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-035000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-035000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:33:28.393151   20598 out.go:177] 
	W0223 13:33:28.414590   20598 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:33:28.414619   20598 out.go:239] * 
	* 
	W0223 13:33:28.415989   20598 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:33:28.499313   20598 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p embed-certs-035000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1": exit status 80
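The stderr block above is dominated by a single loop: minikube asks Docker which host port is published for the container's 22/tcp and, because the container was never created, backs off and retries. A rough Go sketch of that pattern follows; the helper names and backoff policy here are illustrative assumptions, not minikube's actual cli_runner/retry API.

	// Sketch of the lookup-and-retry pattern visible in the cli_runner.go /
	// retry.go entries above. Names and backoff are assumptions.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// sshHostPort asks Docker for the host port mapped to the container's
	// 22/tcp, using the same Go template seen in the log.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			// e.g. "Error: No such container: embed-certs-035000"
			return "", fmt.Errorf("get port 22 for %q: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	// withRetry re-runs fn with a growing delay, mirroring the
	// "will retry after ..." entries above.
	func withRetry(attempts int, fn func() error) error {
		delay := 250 * time.Millisecond
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2
		}
		return err
	}

	func main() {
		err := withRetry(5, func() error {
			port, err := sshHostPort("embed-certs-035000")
			if err != nil {
				return err
			}
			fmt.Println("ssh host port:", port)
			return nil
		})
		if err != nil {
			fmt.Println("giving up:", err)
		}
	}

The retries cannot succeed in this run because the container never existed: the volume-preparation step shown earlier (the embed-certs-035000-preload-sidecar "docker run ... --entrypoint /usr/bin/test ... -d /var/lib") exited with status 125 when Docker Desktop's containerd socket refused the connection.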
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "a372a47b51617062f18b8e814f588850dcb23c180c76a36d294e168da45cea20",
	        "Created": "2023-02-23T21:33:20.186149087Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
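The JSON above describes the "embed-certs-035000" bridge network (Scope, IPAM, Containers: {}), not a container: a bare "docker inspect NAME" matches any object with that name, which is why the post-mortem still gets output even though "docker container inspect embed-certs-035000" keeps failing with "No such container". A small sketch of telling the two apart, assuming a hypothetical helper:

	// Distinguish "the network exists" from "the container exists" by
	// inspecting each object type explicitly. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// exists reports whether `docker <kind> inspect <name>` succeeds,
	// where kind is "container" or "network".
	func exists(kind, name string) bool {
		return exec.Command("docker", kind, "inspect", name).Run() == nil
	}

	func main() {
		name := "embed-certs-035000"
		fmt.Println("network exists:  ", exists("network", name))
		fmt.Println("container exists:", exists("container", name))
	}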
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.071179ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:28.692476   20954 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (43.17s)
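The post-mortem's final probe above is what decides whether log retrieval is attempted: "minikube status --format={{.Host}}" exits non-zero (status 7) and prints "Nonexistent", so the helper skips log collection. A minimal sketch of that probe, with the binary path and profile name taken from the log and the helper name assumed:

	// Sketch of the status probe used by the post-mortem helpers above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostState runs the status command and returns its stdout; the exit
	// error is ignored deliberately, since a missing host still prints a
	// state such as "Nonexistent" (exit status 7 in the log above).
	func hostState(profile string) string {
		out, _ := exec.Command("out/minikube-darwin-amd64",
			"status", "--format={{.Host}}", "-p", profile, "-n", profile).Output()
		return strings.TrimSpace(string(out))
	}

	func main() {
		profile := "embed-certs-035000"
		if state := hostState(profile); state != "Running" {
			fmt.Printf("%q host is not running (state=%q), skipping log retrieval\n",
				profile, state)
		}
	}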

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-942000
version_upgrade_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p stopped-upgrade-942000: exit status 85 (377.331475ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-235000 sudo                                | bridge-235000          | jenkins | v1.29.0 | 23 Feb 23 13:27 PST |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p bridge-235000 sudo                                | bridge-235000          | jenkins | v1.29.0 | 23 Feb 23 13:27 PST |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p bridge-235000 sudo                                | bridge-235000          | jenkins | v1.29.0 | 23 Feb 23 13:27 PST |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p bridge-235000 sudo find                           | bridge-235000          | jenkins | v1.29.0 | 23 Feb 23 13:27 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p bridge-235000 sudo crio                           | bridge-235000          | jenkins | v1.29.0 | 23 Feb 23 13:27 PST |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p bridge-235000                                     | bridge-235000          | jenkins | v1.29.0 | 23 Feb 23 13:27 PST | 23 Feb 23 13:27 PST |
	| start   | -p kubenet-235000                                    | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:27 PST |                     |
	|         | --memory=3072                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                        |         |         |                     |                     |
	|         | --network-plugin=kubenet                             |                        |         |         |                     |                     |
	|         | --driver=docker                                      |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/nsswitch.conf                                   |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/hosts                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/resolv.conf                                     |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo crictl                        | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | pods                                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo crictl                        | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | ps --all                                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo find                          | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/cni -type f -exec sh -c                         |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo ip a s                        | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	| ssh     | -p kubenet-235000 sudo ip r s                        | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | iptables-save                                        |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | iptables -t nat -L -n -v                             |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl status kubelet --all                       |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo docker                        | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo cat                           | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo                               | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo find                          | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kubenet-235000 sudo crio                          | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p kubenet-235000                                    | kubenet-235000         | jenkins | v1.29.0 | 23 Feb 23 13:28 PST | 23 Feb 23 13:28 PST |
	| start   | -p old-k8s-version-639000                            | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:28 PST |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=docker                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-639000      | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:29 PST | 23 Feb 23 13:29 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                        |         |         |                     |                     |
	| stop    | -p old-k8s-version-639000                            | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:29 PST |                     |
	|         | --alsologtostderr -v=3                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-639000           | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:29 PST | 23 Feb 23 13:29 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                        |         |         |                     |                     |
	| start   | -p old-k8s-version-639000                            | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:29 PST |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --kvm-network=default                                |                        |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                        |                        |         |         |                     |                     |
	|         | --disable-driver-mounts                              |                        |         |         |                     |                     |
	|         | --keep-context=false                                 |                        |         |         |                     |                     |
	|         | --driver=docker                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                         |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-639000 sudo                       | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:30 PST |                     |
	|         | crictl images -o json                                |                        |         |         |                     |                     |
	| pause   | -p old-k8s-version-639000                            | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:30 PST |                     |
	|         | --alsologtostderr -v=1                               |                        |         |         |                     |                     |
	| delete  | -p old-k8s-version-639000                            | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:30 PST | 23 Feb 23 13:30 PST |
	| delete  | -p old-k8s-version-639000                            | old-k8s-version-639000 | jenkins | v1.29.0 | 23 Feb 23 13:30 PST | 23 Feb 23 13:30 PST |
	| start   | -p no-preload-317000                                 | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:30 PST |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                          |                        |         |         |                     |                     |
	|         | --driver=docker                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                        |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-317000           | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:31 PST | 23 Feb 23 13:31 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4     |                        |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain               |                        |         |         |                     |                     |
	| stop    | -p no-preload-317000                                 | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:31 PST |                     |
	|         | --alsologtostderr -v=3                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-317000                | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:31 PST | 23 Feb 23 13:31 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4    |                        |         |         |                     |                     |
	| start   | -p no-preload-317000                                 | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:31 PST |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                    |                        |         |         |                     |                     |
	|         | --wait=true --preload=false                          |                        |         |         |                     |                     |
	|         | --driver=docker                                      |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                        |         |         |                     |                     |
	| ssh     | -p no-preload-317000 sudo                            | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:32 PST |                     |
	|         | crictl images -o json                                |                        |         |         |                     |                     |
	| pause   | -p no-preload-317000                                 | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:32 PST |                     |
	|         | --alsologtostderr -v=1                               |                        |         |         |                     |                     |
	| delete  | -p no-preload-317000                                 | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:32 PST | 23 Feb 23 13:32 PST |
	| delete  | -p no-preload-317000                                 | no-preload-317000      | jenkins | v1.29.0 | 23 Feb 23 13:32 PST | 23 Feb 23 13:32 PST |
	| start   | -p embed-certs-035000                                | embed-certs-035000     | jenkins | v1.29.0 | 23 Feb 23 13:32 PST |                     |
	|         | --memory=2200                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --embed-certs --driver=docker                        |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.26.1                         |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 13:32:45
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 13:32:45.569741   20598 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:32:45.569901   20598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:32:45.569907   20598 out.go:309] Setting ErrFile to fd 2...
	I0223 13:32:45.569911   20598 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:32:45.570024   20598 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:32:45.571425   20598 out.go:303] Setting JSON to false
	I0223 13:32:45.589758   20598 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3740,"bootTime":1677184225,"procs":389,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:32:45.589845   20598 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:32:45.611802   20598 out.go:177] * [embed-certs-035000] minikube v1.29.0 on Darwin 13.2
	I0223 13:32:45.653940   20598 notify.go:220] Checking for updates...
	I0223 13:32:45.675704   20598 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:32:45.696823   20598 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:32:45.717828   20598 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:32:45.739069   20598 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:32:45.760689   20598 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:32:45.781701   20598 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:32:45.803546   20598 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:32:45.803715   20598 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:32:45.803840   20598 config.go:182] Loaded profile config "stopped-upgrade-942000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:32:45.803902   20598 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:32:45.864980   20598 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:32:45.865161   20598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:32:46.007204   20598 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:32:45.915269695 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:32:46.029119   20598 out.go:177] * Using the docker driver based on user configuration
	I0223 13:32:46.050723   20598 start.go:296] selected driver: docker
	I0223 13:32:46.050737   20598 start.go:857] validating driver "docker" against <nil>
	I0223 13:32:46.050766   20598 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:32:46.053274   20598 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:32:46.195497   20598 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:32:46.103619626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:32:46.195635   20598 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:32:46.195801   20598 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:32:46.217631   20598 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:32:46.239260   20598 cni.go:84] Creating CNI manager for ""
	I0223 13:32:46.239396   20598 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:32:46.239413   20598 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:32:46.239427   20598 start_flags.go:319] config:
	{Name:embed-certs-035000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:32:46.261385   20598 out.go:177] * Starting control plane node embed-certs-035000 in cluster embed-certs-035000
	I0223 13:32:46.283125   20598 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:32:46.304240   20598 out.go:177] * Pulling base image ...
	I0223 13:32:46.346319   20598 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:32:46.346370   20598 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:32:46.346383   20598 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:32:46.346397   20598 cache.go:57] Caching tarball of preloaded images
	I0223 13:32:46.346521   20598 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:32:46.346530   20598 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:32:46.347212   20598 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/embed-certs-035000/config.json ...
	I0223 13:32:46.347291   20598 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/embed-certs-035000/config.json: {Name:mk37ba39e9b049849e0845131c6e544be497edec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:32:46.405663   20598 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:32:46.405706   20598 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:32:46.405725   20598 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:32:46.405786   20598 start.go:364] acquiring machines lock for embed-certs-035000: {Name:mk109788415ddd73a83a349dd1a61647eb0703e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:32:46.405934   20598 start.go:368] acquired machines lock for "embed-certs-035000" in 136.279µs
	I0223 13:32:46.405968   20598 start.go:93] Provisioning new machine with config: &{Name:embed-certs-035000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-035000 Namespace:default APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:32:46.406043   20598 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:32:46.448440   20598 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:32:46.448893   20598 start.go:159] libmachine.API.Create for "embed-certs-035000" (driver="docker")
	I0223 13:32:46.448944   20598 client.go:168] LocalClient.Create starting
	I0223 13:32:46.449257   20598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:32:46.449372   20598 main.go:141] libmachine: Decoding PEM data...
	I0223 13:32:46.449409   20598 main.go:141] libmachine: Parsing certificate...
	I0223 13:32:46.449534   20598 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:32:46.449597   20598 main.go:141] libmachine: Decoding PEM data...
	I0223 13:32:46.449619   20598 main.go:141] libmachine: Parsing certificate...
	I0223 13:32:46.450582   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:32:46.507937   20598 cli_runner.go:211] docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:32:46.508035   20598 network_create.go:281] running [docker network inspect embed-certs-035000] to gather additional debugging logs...
	I0223 13:32:46.508052   20598 cli_runner.go:164] Run: docker network inspect embed-certs-035000
	W0223 13:32:46.561699   20598 cli_runner.go:211] docker network inspect embed-certs-035000 returned with exit code 1
	I0223 13:32:46.561724   20598 network_create.go:284] error running [docker network inspect embed-certs-035000]: docker network inspect embed-certs-035000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-035000
	I0223 13:32:46.561734   20598 network_create.go:286] output of [docker network inspect embed-certs-035000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-035000
	
	** /stderr **
	I0223 13:32:46.561828   20598 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:32:46.618137   20598 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:46.618459   20598 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00117ac00}
	I0223 13:32:46.618472   20598 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:32:46.618538   20598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	W0223 13:32:46.673339   20598 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000 returned with exit code 1
	W0223 13:32:46.673377   20598 network_create.go:148] failed to create docker network embed-certs-035000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:32:46.673391   20598 network_create.go:115] failed to create docker network embed-certs-035000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:32:46.674776   20598 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:32:46.675115   20598 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000155700}
	I0223 13:32:46.675129   20598 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:32:46.675199   20598 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	I0223 13:32:46.761396   20598 network_create.go:107] docker network embed-certs-035000 192.168.67.0/24 created
	I0223 13:32:46.761436   20598 kic.go:117] calculated static IP "192.168.67.2" for the "embed-certs-035000" container
	I0223 13:32:46.761542   20598 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:32:46.820260   20598 cli_runner.go:164] Run: docker volume create embed-certs-035000 --label name.minikube.sigs.k8s.io=embed-certs-035000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:32:46.875791   20598 oci.go:103] Successfully created a docker volume embed-certs-035000
	I0223 13:32:46.875903   20598 cli_runner.go:164] Run: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:32:47.194821   20598 cli_runner.go:211] docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:32:47.194865   20598 client.go:171] LocalClient.Create took 745.911306ms
	I0223 13:32:49.196393   20598 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:32:49.196490   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:49.253313   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:49.253431   20598 retry.go:31] will retry after 330.631214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:49.586247   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:49.644192   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:49.644280   20598 retry.go:31] will retry after 294.648361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:49.941288   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:49.997479   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:49.997561   20598 retry.go:31] will retry after 554.080828ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:50.551907   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:50.611845   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:32:50.611931   20598 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:32:50.611950   20598 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:50.612017   20598 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:32:50.612062   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:50.665971   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:50.666058   20598 retry.go:31] will retry after 282.430368ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:50.949895   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:51.009402   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:51.009484   20598 retry.go:31] will retry after 345.329444ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:51.356975   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:51.413947   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:32:51.414029   20598 retry.go:31] will retry after 824.611642ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:52.241066   20598 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:32:52.300464   20598 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:32:52.300569   20598 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:32:52.300586   20598 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:52.300593   20598 start.go:128] duration metric: createHost completed in 5.894528037s
	I0223 13:32:52.300599   20598 start.go:83] releasing machines lock for "embed-certs-035000", held for 5.894639943s
	W0223 13:32:52.300614   20598 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:32:52.301041   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:52.355168   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:52.355219   20598 delete.go:82] Unable to get host status for embed-certs-035000, assuming it has already been deleted: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:32:52.355353   20598 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:32:52.355362   20598 start.go:706] Will try again in 5 seconds ...
	I0223 13:32:57.357526   20598 start.go:364] acquiring machines lock for embed-certs-035000: {Name:mk109788415ddd73a83a349dd1a61647eb0703e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:32:57.357800   20598 start.go:368] acquired machines lock for "embed-certs-035000" in 125.135µs
	I0223 13:32:57.357846   20598 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:32:57.357859   20598 fix.go:55] fixHost starting: 
	I0223 13:32:57.358308   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.416131   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:57.416172   20598 fix.go:103] recreateIfNeeded on embed-certs-035000: state= err=unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.416194   20598 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:32:57.437649   20598 out.go:177] * docker "embed-certs-035000" container is missing, will recreate.
	I0223 13:32:57.480197   20598 delete.go:124] DEMOLISHING embed-certs-035000 ...
	I0223 13:32:57.480383   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.536017   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:32:57.536060   20598 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.536073   20598 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.536470   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.591257   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:57.591306   20598 delete.go:82] Unable to get host status for embed-certs-035000, assuming it has already been deleted: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.591393   20598 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:32:57.645287   20598 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:32:57.645319   20598 kic.go:367] could not find the container embed-certs-035000 to remove it. will try anyways
	I0223 13:32:57.645415   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:57.699147   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:32:57.699189   20598 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:57.699270   20598 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0"
	W0223 13:32:57.754399   20598 cli_runner.go:211] docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:32:57.754429   20598 oci.go:641] error shutdown embed-certs-035000: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:58.754815   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:58.813589   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:58.813632   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:58.813642   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:32:58.813663   20598 retry.go:31] will retry after 677.639959ms: couldn't verify container is exited. %!v(MISSING): unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:59.493629   20598 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:32:59.553687   20598 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:32:59.553732   20598 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:32:59.553740   20598 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:32:59.553761   20598 retry.go:31] will retry after 1.086607878s: couldn't verify container is exited. %!v(MISSING): unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	* 
	* The control plane node "m01" does not exist.
	  To start a cluster, run: "minikube start -p stopped-upgrade-942000"

-- /stdout --
version_upgrade_test.go:216: `minikube logs` after upgrade to HEAD from v1.9.0 failed: exit status 85
--- FAIL: TestStoppedBinaryUpgrade/MinikubeLogs (0.38s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-571000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-571000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (40.052598824s)

-- stdout --
	* [default-k8s-diff-port-571000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node default-k8s-diff-port-571000 in cluster default-k8s-diff-port-571000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-diff-port-571000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0223 13:33:07.505690   20786 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:33:07.505844   20786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:07.505849   20786 out.go:309] Setting ErrFile to fd 2...
	I0223 13:33:07.505853   20786 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:07.505963   20786 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:33:07.507372   20786 out.go:303] Setting JSON to false
	I0223 13:33:07.525810   20786 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3762,"bootTime":1677184225,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:33:07.525896   20786 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:33:07.547951   20786 out.go:177] * [default-k8s-diff-port-571000] minikube v1.29.0 on Darwin 13.2
	I0223 13:33:07.591936   20786 notify.go:220] Checking for updates...
	I0223 13:33:07.613986   20786 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:33:07.636064   20786 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:33:07.657676   20786 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:33:07.699942   20786 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:33:07.742773   20786 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:33:07.764915   20786 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:33:07.787433   20786 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:07.787628   20786 config.go:182] Loaded profile config "embed-certs-035000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:07.787759   20786 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:33:07.787839   20786 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:33:07.849316   20786 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:33:07.849433   20786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:33:07.991429   20786 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:33:07.899265923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:33:08.013337   20786 out.go:177] * Using the docker driver based on user configuration
	I0223 13:33:08.035211   20786 start.go:296] selected driver: docker
	I0223 13:33:08.035241   20786 start.go:857] validating driver "docker" against <nil>
	I0223 13:33:08.035265   20786 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:33:08.039206   20786 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:33:08.179565   20786 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:33:08.089208249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:33:08.179692   20786 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 13:33:08.179864   20786 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:33:08.201425   20786 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:33:08.223590   20786 cni.go:84] Creating CNI manager for ""
	I0223 13:33:08.223628   20786 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:33:08.223642   20786 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:33:08.223657   20786 start_flags.go:319] config:
	{Name:default-k8s-diff-port-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:33:08.266278   20786 out.go:177] * Starting control plane node default-k8s-diff-port-571000 in cluster default-k8s-diff-port-571000
	I0223 13:33:08.289573   20786 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:33:08.311304   20786 out.go:177] * Pulling base image ...
	I0223 13:33:08.353631   20786 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:33:08.353733   20786 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:33:08.353748   20786 cache.go:57] Caching tarball of preloaded images
	I0223 13:33:08.353728   20786 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:33:08.353989   20786 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:33:08.354009   20786 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:33:08.354767   20786 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/default-k8s-diff-port-571000/config.json ...
	I0223 13:33:08.355001   20786 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/default-k8s-diff-port-571000/config.json: {Name:mk6e9f98e10bb6441441d24460cddbf406d1a28d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:33:08.410544   20786 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:33:08.410562   20786 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:33:08.410582   20786 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:33:08.410624   20786 start.go:364] acquiring machines lock for default-k8s-diff-port-571000: {Name:mk040bb7b39c6c5d5f1dfd7a7376050165aac48b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:33:08.410787   20786 start.go:368] acquired machines lock for "default-k8s-diff-port-571000" in 151.772µs
	I0223 13:33:08.410823   20786 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-571000 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disabl
eOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:33:08.410906   20786 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:33:08.454463   20786 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:33:08.454965   20786 start.go:159] libmachine.API.Create for "default-k8s-diff-port-571000" (driver="docker")
	I0223 13:33:08.455041   20786 client.go:168] LocalClient.Create starting
	I0223 13:33:08.455224   20786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:33:08.455311   20786 main.go:141] libmachine: Decoding PEM data...
	I0223 13:33:08.455349   20786 main.go:141] libmachine: Parsing certificate...
	I0223 13:33:08.455467   20786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:33:08.455542   20786 main.go:141] libmachine: Decoding PEM data...
	I0223 13:33:08.455559   20786 main.go:141] libmachine: Parsing certificate...
	I0223 13:33:08.456416   20786 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:33:08.513679   20786 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:33:08.513794   20786 network_create.go:281] running [docker network inspect default-k8s-diff-port-571000] to gather additional debugging logs...
	I0223 13:33:08.513828   20786 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000
	W0223 13:33:08.568429   20786 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:08.568454   20786 network_create.go:284] error running [docker network inspect default-k8s-diff-port-571000]: docker network inspect default-k8s-diff-port-571000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-571000
	I0223 13:33:08.568473   20786 network_create.go:286] output of [docker network inspect default-k8s-diff-port-571000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-571000
	
	** /stderr **
	I0223 13:33:08.568575   20786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:33:08.626081   20786 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:08.627525   20786 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:08.629136   20786 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:08.629488   20786 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f352c0}
	I0223 13:33:08.629501   20786 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:33:08.629576   20786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	W0223 13:33:08.684271   20786 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:08.684306   20786 network_create.go:148] failed to create docker network default-k8s-diff-port-571000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:33:08.684323   20786 network_create.go:115] failed to create docker network default-k8s-diff-port-571000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:33:08.685643   20786 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:08.685957   20786 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00053a490}
	I0223 13:33:08.685967   20786 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:33:08.686054   20786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	I0223 13:33:08.775121   20786 network_create.go:107] docker network default-k8s-diff-port-571000 192.168.85.0/24 created
	I0223 13:33:08.775158   20786 kic.go:117] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-571000" container
	I0223 13:33:08.775279   20786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:33:08.833171   20786 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-571000 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:33:08.887758   20786 oci.go:103] Successfully created a docker volume default-k8s-diff-port-571000
	I0223 13:33:08.887877   20786 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:33:09.107430   20786 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:33:09.107482   20786 client.go:171] LocalClient.Create took 652.430392ms
	I0223 13:33:11.109919   20786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:33:11.110069   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:11.166507   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:11.166647   20786 retry.go:31] will retry after 213.1648ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:11.382183   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:11.440251   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:11.440348   20786 retry.go:31] will retry after 264.747315ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:11.707020   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:11.769128   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:11.769228   20786 retry.go:31] will retry after 332.614334ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:12.104233   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:12.164196   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:12.164292   20786 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:33:12.164314   20786 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:12.164374   20786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:33:12.164445   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:12.218194   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:12.218280   20786 retry.go:31] will retry after 193.332959ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:12.412598   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:12.471337   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:12.471422   20786 retry.go:31] will retry after 200.421772ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:12.674286   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:12.732947   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:12.733045   20786 retry.go:31] will retry after 726.359241ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:13.460776   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:13.518373   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:13.518464   20786 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:33:13.518478   20786 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:13.518493   20786 start.go:128] duration metric: createHost completed in 5.107572045s
	I0223 13:33:13.518499   20786 start.go:83] releasing machines lock for "default-k8s-diff-port-571000", held for 5.107693581s
	W0223 13:33:13.518515   20786 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:33:13.518954   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:13.572850   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:13.572902   20786 delete.go:82] Unable to get host status for default-k8s-diff-port-571000, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:33:13.573040   20786 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:33:13.573050   20786 start.go:706] Will try again in 5 seconds ...
	I0223 13:33:18.573293   20786 start.go:364] acquiring machines lock for default-k8s-diff-port-571000: {Name:mk040bb7b39c6c5d5f1dfd7a7376050165aac48b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:33:18.573378   20786 start.go:368] acquired machines lock for "default-k8s-diff-port-571000" in 66.165µs
	I0223 13:33:18.573401   20786 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:33:18.573409   20786 fix.go:55] fixHost starting: 
	I0223 13:33:18.573677   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:18.631691   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:18.631735   20786 fix.go:103] recreateIfNeeded on default-k8s-diff-port-571000: state= err=unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:18.631753   20786 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:33:18.653328   20786 out.go:177] * docker "default-k8s-diff-port-571000" container is missing, will recreate.
	I0223 13:33:18.674065   20786 delete.go:124] DEMOLISHING default-k8s-diff-port-571000 ...
	I0223 13:33:18.674314   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:18.732184   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:18.732231   20786 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:18.732245   20786 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:18.732625   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:18.789027   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:18.789077   20786 delete.go:82] Unable to get host status for default-k8s-diff-port-571000, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:18.789159   20786 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-571000
	W0223 13:33:18.846400   20786 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:18.846437   20786 kic.go:367] could not find the container default-k8s-diff-port-571000 to remove it. will try anyways
	I0223 13:33:18.846513   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:18.901267   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:18.901316   20786 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:18.901412   20786 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0"
	W0223 13:33:18.956379   20786 cli_runner.go:211] docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:33:18.956407   20786 oci.go:641] error shutdown default-k8s-diff-port-571000: docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:19.956728   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:20.015659   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:20.015707   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:20.015716   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:20.015737   20786 retry.go:31] will retry after 630.920674ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:20.649105   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:20.706650   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:20.706700   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:20.706707   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:20.706728   20786 retry.go:31] will retry after 506.427124ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:21.214486   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:21.272908   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:21.272960   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:21.272969   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:21.272989   20786 retry.go:31] will retry after 938.300596ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:22.213754   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:22.274842   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:22.274894   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:22.274906   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:22.274926   20786 retry.go:31] will retry after 914.650244ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:23.191952   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:23.250535   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:23.250579   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:23.250588   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:23.250607   20786 retry.go:31] will retry after 2.903872361s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:26.154943   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:26.215122   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:26.215172   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:26.215180   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:26.215199   20786 retry.go:31] will retry after 4.677801008s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:30.894287   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:30.953236   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:30.953285   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:30.953293   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:30.953321   20786 retry.go:31] will retry after 6.087343095s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:37.042070   20786 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:37.103935   20786 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:37.103977   20786 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:37.103984   20786 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:33:37.104009   20786 oci.go:88] couldn't shut down default-k8s-diff-port-571000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	 
	I0223 13:33:37.104102   20786 cli_runner.go:164] Run: docker rm -f -v default-k8s-diff-port-571000
	I0223 13:33:37.160212   20786 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-571000
	W0223 13:33:37.215212   20786 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:37.215326   20786 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:33:37.271630   20786 cli_runner.go:164] Run: docker network rm default-k8s-diff-port-571000
	W0223 13:33:37.381249   20786 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:33:37.381269   20786 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:33:38.382524   20786 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:33:38.404624   20786 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:33:38.404875   20786 start.go:159] libmachine.API.Create for "default-k8s-diff-port-571000" (driver="docker")
	I0223 13:33:38.404926   20786 client.go:168] LocalClient.Create starting
	I0223 13:33:38.405130   20786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:33:38.405220   20786 main.go:141] libmachine: Decoding PEM data...
	I0223 13:33:38.405249   20786 main.go:141] libmachine: Parsing certificate...
	I0223 13:33:38.405341   20786 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:33:38.405411   20786 main.go:141] libmachine: Decoding PEM data...
	I0223 13:33:38.405428   20786 main.go:141] libmachine: Parsing certificate...
	I0223 13:33:38.427064   20786 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:33:38.488596   20786 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:33:38.488693   20786 network_create.go:281] running [docker network inspect default-k8s-diff-port-571000] to gather additional debugging logs...
	I0223 13:33:38.488714   20786 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000
	W0223 13:33:38.543303   20786 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:38.543334   20786 network_create.go:284] error running [docker network inspect default-k8s-diff-port-571000]: docker network inspect default-k8s-diff-port-571000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-571000
	I0223 13:33:38.543347   20786 network_create.go:286] output of [docker network inspect default-k8s-diff-port-571000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-571000
	
	** /stderr **
	I0223 13:33:38.543432   20786 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:33:38.600216   20786 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:38.600518   20786 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000bcb560}
	I0223 13:33:38.600532   20786 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:33:38.600607   20786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	W0223 13:33:38.655173   20786 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:38.655207   20786 network_create.go:148] failed to create docker network default-k8s-diff-port-571000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:33:38.655226   20786 network_create.go:115] failed to create docker network default-k8s-diff-port-571000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:33:38.656785   20786 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:33:38.657109   20786 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00117e730}
	I0223 13:33:38.657119   20786 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:33:38.657186   20786 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	I0223 13:33:38.745484   20786 network_create.go:107] docker network default-k8s-diff-port-571000 192.168.67.0/24 created
	I0223 13:33:38.745518   20786 kic.go:117] calculated static IP "192.168.67.2" for the "default-k8s-diff-port-571000" container
	I0223 13:33:38.745639   20786 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:33:38.802727   20786 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-571000 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:33:38.856815   20786 oci.go:103] Successfully created a docker volume default-k8s-diff-port-571000
	I0223 13:33:38.856935   20786 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:33:38.994624   20786 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:33:38.994661   20786 client.go:171] LocalClient.Create took 589.726902ms
	I0223 13:33:40.995314   20786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:33:40.995455   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:41.053260   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:41.053354   20786 retry.go:31] will retry after 216.871679ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:41.272543   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:41.330096   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:41.330187   20786 retry.go:31] will retry after 375.451623ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:41.708014   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:41.767394   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:41.767487   20786 retry.go:31] will retry after 576.630147ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:42.345244   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:42.403083   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:42.403179   20786 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:33:42.403193   20786 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:42.403252   20786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:33:42.403317   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:42.457530   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:42.457631   20786 retry.go:31] will retry after 126.982683ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:42.585226   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:42.641959   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:42.642049   20786 retry.go:31] will retry after 365.456978ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:43.009860   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:43.069959   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:43.070051   20786 retry.go:31] will retry after 795.087019ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:43.867442   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:43.925683   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:43.925786   20786 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:33:43.925800   20786 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:43.925807   20786 start.go:128] duration metric: createHost completed in 5.543202211s
	I0223 13:33:43.925882   20786 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:33:43.925934   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:43.980085   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:43.980171   20786 retry.go:31] will retry after 374.017607ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:44.356552   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:44.414702   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:44.414788   20786 retry.go:31] will retry after 379.099315ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:44.794361   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:44.854292   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:44.854381   20786 retry.go:31] will retry after 366.167091ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:45.222993   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:45.284408   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:45.284500   20786 retry.go:31] will retry after 579.12371ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:45.864604   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:45.922668   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:45.922761   20786 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:33:45.922774   20786 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:45.922844   20786 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:33:45.922902   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:45.978929   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:45.979023   20786 retry.go:31] will retry after 187.804407ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:46.169243   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:46.227310   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:46.227398   20786 retry.go:31] will retry after 308.664379ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:46.538423   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:46.596892   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:33:46.596985   20786 retry.go:31] will retry after 705.113138ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:47.304521   20786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:33:47.363345   20786 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:33:47.363444   20786 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:33:47.363460   20786 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:47.363465   20786 fix.go:57] fixHost completed within 28.790001204s
	I0223 13:33:47.363472   20786 start.go:83] releasing machines lock for "default-k8s-diff-port-571000", held for 28.790032792s
	W0223 13:33:47.363619   20786 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-571000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-571000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:33:47.405579   20786 out.go:177] 
	W0223 13:33:47.426976   20786 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:33:47.427007   20786 out.go:239] * 
	* 
	W0223 13:33:47.428312   20786 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:33:47.490866   20786 out.go:177] 

** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-571000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "2da02a4cb38be8fbd53edbe78461b2a8a58de1dc7405d6775b86167bbd5fdb3f",
	        "Created": "2023-02-23T21:33:38.708852464Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (100.035224ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:47.684774   21078 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-035000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-035000 create -f testdata/busybox.yaml: exit status 1 (35.435014ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-035000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-035000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "a372a47b51617062f18b8e814f588850dcb23c180c76a36d294e168da45cea20",
	        "Created": "2023-02-23T21:33:20.186149087Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.229356ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:28.887050   20961 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "a372a47b51617062f18b8e814f588850dcb23c180c76a36d294e168da45cea20",
	        "Created": "2023-02-23T21:33:20.186149087Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.017056ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:29.046603   20967 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.35s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-035000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-035000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-035000 describe deploy/metrics-server -n kube-system: exit status 1 (35.081712ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-035000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-035000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "a372a47b51617062f18b8e814f588850dcb23c180c76a36d294e168da45cea20",
	        "Created": "2023-02-23T21:33:20.186149087Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (99.425029ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:29.464953   20978 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (18.98s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-035000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p embed-certs-035000 --alsologtostderr -v=3: exit status 82 (18.804042504s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-035000"  ...
	* Stopping node "embed-certs-035000"  ...
	* Stopping node "embed-certs-035000"  ...
	* Stopping node "embed-certs-035000"  ...
	* Stopping node "embed-certs-035000"  ...
	* Stopping node "embed-certs-035000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:33:29.509424   20982 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:33:29.509622   20982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:29.509628   20982 out.go:309] Setting ErrFile to fd 2...
	I0223 13:33:29.509632   20982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:29.509737   20982 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:33:29.510069   20982 out.go:303] Setting JSON to false
	I0223 13:33:29.510219   20982 mustload.go:65] Loading cluster: embed-certs-035000
	I0223 13:33:29.510497   20982 config.go:182] Loaded profile config "embed-certs-035000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:29.510558   20982 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/embed-certs-035000/config.json ...
	I0223 13:33:29.510836   20982 mustload.go:65] Loading cluster: embed-certs-035000
	I0223 13:33:29.510935   20982 config.go:182] Loaded profile config "embed-certs-035000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:29.510967   20982 stop.go:39] StopHost: embed-certs-035000
	I0223 13:33:29.532822   20982 out.go:177] * Stopping node "embed-certs-035000"  ...
	I0223 13:33:29.574831   20982 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:29.631808   20982 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:29.631859   20982 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:33:29.631876   20982 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:29.631924   20982 retry.go:31] will retry after 1.03941757s: docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:30.673474   20982 stop.go:39] StopHost: embed-certs-035000
	I0223 13:33:30.696870   20982 out.go:177] * Stopping node "embed-certs-035000"  ...
	I0223 13:33:30.739504   20982 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:30.796974   20982 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:30.797019   20982 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:33:30.797035   20982 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:30.797052   20982 retry.go:31] will retry after 1.827737484s: docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:32.626420   20982 stop.go:39] StopHost: embed-certs-035000
	I0223 13:33:32.651646   20982 out.go:177] * Stopping node "embed-certs-035000"  ...
	I0223 13:33:32.695940   20982 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:32.751935   20982 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:32.751982   20982 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:33:32.751998   20982 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:32.752021   20982 retry.go:31] will retry after 3.347397225s: docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:36.101590   20982 stop.go:39] StopHost: embed-certs-035000
	I0223 13:33:36.124056   20982 out.go:177] * Stopping node "embed-certs-035000"  ...
	I0223 13:33:36.166522   20982 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:36.225247   20982 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:36.225293   20982 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:33:36.225308   20982 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:36.225322   20982 retry.go:31] will retry after 4.076416953s: docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:40.302792   20982 stop.go:39] StopHost: embed-certs-035000
	I0223 13:33:40.323803   20982 out.go:177] * Stopping node "embed-certs-035000"  ...
	I0223 13:33:40.345219   20982 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:40.405515   20982 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:40.405557   20982 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:33:40.405578   20982 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:40.405593   20982 retry.go:31] will retry after 7.570560826s: docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:47.976281   20982 stop.go:39] StopHost: embed-certs-035000
	I0223 13:33:47.998549   20982 out.go:177] * Stopping node "embed-certs-035000"  ...
	I0223 13:33:48.040923   20982 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:48.096662   20982 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:48.096708   20982 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	W0223 13:33:48.096721   20982 stop.go:163] stop host returned error: ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:48.118157   20982 out.go:177] 
	W0223 13:33:48.139390   20982 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-035000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect embed-certs-035000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:33:48.139418   20982 out.go:239] * 
	* 
	W0223 13:33:48.165221   20982 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:33:48.225063   20982 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p embed-certs-035000 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "a372a47b51617062f18b8e814f588850dcb23c180c76a36d294e168da45cea20",
	        "Created": "2023-02-23T21:33:20.186149087Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (114.400679ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:48.446451   21103 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (18.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-571000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-571000 create -f testdata/busybox.yaml: exit status 1 (34.257989ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-571000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context default-k8s-diff-port-571000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "2da02a4cb38be8fbd53edbe78461b2a8a58de1dc7405d6775b86167bbd5fdb3f",
	        "Created": "2023-02-23T21:33:38.708852464Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (100.592002ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:47.878404   21085 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "2da02a4cb38be8fbd53edbe78461b2a8a58de1dc7405d6775b86167bbd5fdb3f",
	        "Created": "2023-02-23T21:33:38.708852464Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (119.172191ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:48.056650   21091 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (0.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-571000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-571000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-571000 describe deploy/metrics-server -n kube-system: exit status 1 (36.416772ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-571000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-571000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "2da02a4cb38be8fbd53edbe78461b2a8a58de1dc7405d6775b86167bbd5fdb3f",
	        "Created": "2023-02-23T21:33:38.708852464Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (128.389345ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:48.645688   21113 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.59s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.59s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (102.559135ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:48.548970   21108 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-035000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "a372a47b51617062f18b8e814f588850dcb23c180c76a36d294e168da45cea20",
	        "Created": "2023-02-23T21:33:20.186149087Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.94.0/24",
	                    "Gateway": "192.168.94.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.409596ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:33:49.034433   21128 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-571000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-571000 --alsologtostderr -v=3: exit status 82 (11.793817521s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-571000"  ...
	* Stopping node "default-k8s-diff-port-571000"  ...
	* Stopping node "default-k8s-diff-port-571000"  ...
	* Stopping node "default-k8s-diff-port-571000"  ...
	* Stopping node "default-k8s-diff-port-571000"  ...
	* Stopping node "default-k8s-diff-port-571000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:33:48.691794   21121 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:33:48.691994   21121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:48.692000   21121 out.go:309] Setting ErrFile to fd 2...
	I0223 13:33:48.692004   21121 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:48.692118   21121 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:33:48.692445   21121 out.go:303] Setting JSON to false
	I0223 13:33:48.692584   21121 mustload.go:65] Loading cluster: default-k8s-diff-port-571000
	I0223 13:33:48.692868   21121 config.go:182] Loaded profile config "default-k8s-diff-port-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:48.692938   21121 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/default-k8s-diff-port-571000/config.json ...
	I0223 13:33:48.693220   21121 mustload.go:65] Loading cluster: default-k8s-diff-port-571000
	I0223 13:33:48.693322   21121 config.go:182] Loaded profile config "default-k8s-diff-port-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:48.693356   21121 stop.go:39] StopHost: default-k8s-diff-port-571000
	I0223 13:33:48.714649   21121 out.go:177] * Stopping node "default-k8s-diff-port-571000"  ...
	I0223 13:33:48.758218   21121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:48.848486   21121 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:48.848550   21121 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:33:48.848570   21121 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:48.848611   21121 retry.go:31] will retry after 644.790658ms: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:49.494856   21121 stop.go:39] StopHost: default-k8s-diff-port-571000
	I0223 13:33:49.516569   21121 out.go:177] * Stopping node "default-k8s-diff-port-571000"  ...
	I0223 13:33:49.580726   21121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:49.636604   21121 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:49.636647   21121 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:33:49.636662   21121 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:49.636679   21121 retry.go:31] will retry after 2.220511895s: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:51.859317   21121 stop.go:39] StopHost: default-k8s-diff-port-571000
	I0223 13:33:51.882736   21121 out.go:177] * Stopping node "default-k8s-diff-port-571000"  ...
	I0223 13:33:51.925422   21121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:51.982003   21121 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:51.982043   21121 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:33:51.982064   21121 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:51.982083   21121 retry.go:31] will retry after 2.910427746s: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:54.892841   21121 stop.go:39] StopHost: default-k8s-diff-port-571000
	I0223 13:33:54.915222   21121 out.go:177] * Stopping node "default-k8s-diff-port-571000"  ...
	I0223 13:33:54.937103   21121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:54.993292   21121 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:54.993339   21121 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:33:54.993360   21121 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:54.993378   21121 retry.go:31] will retry after 2.028002722s: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:57.023501   21121 stop.go:39] StopHost: default-k8s-diff-port-571000
	I0223 13:33:57.048641   21121 out.go:177] * Stopping node "default-k8s-diff-port-571000"  ...
	I0223 13:33:57.090668   21121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:33:57.146646   21121 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:57.146684   21121 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:33:57.146695   21121 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:33:57.146712   21121 retry.go:31] will retry after 3.019356864s: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:00.168360   21121 stop.go:39] StopHost: default-k8s-diff-port-571000
	I0223 13:34:00.190714   21121 out.go:177] * Stopping node "default-k8s-diff-port-571000"  ...
	I0223 13:34:00.232436   21121 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:00.292369   21121 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:00.292407   21121 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	W0223 13:34:00.292420   21121 stop.go:163] stop host returned error: ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:00.313087   21121 out.go:177] 
	W0223 13:34:00.334485   21121 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-diff-port-571000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect default-k8s-diff-port-571000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:00.334535   21121 out.go:239] * 
	* 
	W0223 13:34:00.338957   21121 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:34:00.397384   21121 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p default-k8s-diff-port-571000 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "2da02a4cb38be8fbd53edbe78461b2a8a58de1dc7405d6775b86167bbd5fdb3f",
	        "Created": "2023-02-23T21:33:38.708852464Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (100.417994ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:34:00.600226   21208 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)
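
Note on the failure above: the post-mortem "docker inspect default-k8s-diff-port-571000" returned only the profile's Docker network (bridge driver, subnet 192.168.67.0/24) — the container itself was never created, so every "docker container inspect ... --format={{.State.Status}}" call exits 1 with "No such container", minikube's stop path cannot determine the host state, and "minikube stop" gives up with exit status 82 (GUEST_STOP_TIMEOUT). The sketch below reproduces that state probe outside the test harness. It is a minimal illustration, not minikube's actual code: the helper name containerState and the "nonexistent" sentinel are assumptions, and it only presumes a docker binary on PATH.

	// Minimal sketch (illustrative, not minikube source): probe a Docker
	// container's state the same way the log above does, treating
	// "No such container" as a distinct "nonexistent" result rather than
	// an unknown-state error.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState runs `docker container inspect NAME --format {{.State.Status}}`.
	// It returns the status string (e.g. "running", "exited"), "nonexistent"
	// if the container does not exist, or an error for any other failure.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").CombinedOutput()
		if err != nil {
			if strings.Contains(string(out), "No such container") {
				return "nonexistent", nil
			}
			return "", fmt.Errorf("inspect %s: %v: %s", name, err, out)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("default-k8s-diff-port-571000")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("state:", state)
	}

Run against this profile at the time of the failure, the sketch would have printed "state: nonexistent", consistent with the "Nonexistent" host state that "minikube status --format={{.Host}}" reported above.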

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (58.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-035000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-035000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (57.977382597s)

                                                
                                                
-- stdout --
	* [embed-certs-035000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node embed-certs-035000 in cluster embed-certs-035000
	* Pulling base image ...
	* docker "embed-certs-035000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "embed-certs-035000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:33:49.078150   21132 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:33:49.078341   21132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:49.078346   21132 out.go:309] Setting ErrFile to fd 2...
	I0223 13:33:49.078354   21132 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:33:49.078466   21132 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:33:49.079829   21132 out.go:303] Setting JSON to false
	I0223 13:33:49.098275   21132 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3804,"bootTime":1677184225,"procs":390,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:33:49.098359   21132 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:33:49.120666   21132 out.go:177] * [embed-certs-035000] minikube v1.29.0 on Darwin 13.2
	I0223 13:33:49.163318   21132 notify.go:220] Checking for updates...
	I0223 13:33:49.163342   21132 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:33:49.185375   21132 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:33:49.207150   21132 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:33:49.228149   21132 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:33:49.249301   21132 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:33:49.271101   21132 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:33:49.292205   21132 config.go:182] Loaded profile config "embed-certs-035000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:33:49.292526   21132 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:33:49.352365   21132 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:33:49.352522   21132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:33:49.495159   21132 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:33:49.402612358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:33:49.537427   21132 out.go:177] * Using the docker driver based on existing profile
	I0223 13:33:49.601204   21132 start.go:296] selected driver: docker
	I0223 13:33:49.601219   21132 start.go:857] validating driver "docker" against &{Name:embed-certs-035000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-035000 Namespace:default APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Moun
tGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:33:49.601289   21132 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:33:49.604058   21132 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:33:49.745346   21132 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:33:49.654010696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:33:49.745476   21132 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:33:49.745493   21132 cni.go:84] Creating CNI manager for ""
	I0223 13:33:49.745506   21132 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:33:49.745525   21132 start_flags.go:319] config:
	{Name:embed-certs-035000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:embed-certs-035000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:33:49.788160   21132 out.go:177] * Starting control plane node embed-certs-035000 in cluster embed-certs-035000
	I0223 13:33:49.808879   21132 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:33:49.830072   21132 out.go:177] * Pulling base image ...
	I0223 13:33:49.871974   21132 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:33:49.872041   21132 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:33:49.872063   21132 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:33:49.872081   21132 cache.go:57] Caching tarball of preloaded images
	I0223 13:33:49.872302   21132 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:33:49.872321   21132 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:33:49.873257   21132 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/embed-certs-035000/config.json ...
	I0223 13:33:49.929727   21132 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:33:49.929748   21132 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:33:49.929767   21132 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:33:49.929820   21132 start.go:364] acquiring machines lock for embed-certs-035000: {Name:mk109788415ddd73a83a349dd1a61647eb0703e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:33:49.929921   21132 start.go:368] acquired machines lock for "embed-certs-035000" in 82.311µs
	I0223 13:33:49.929961   21132 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:33:49.929969   21132 fix.go:55] fixHost starting: 
	I0223 13:33:49.930202   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:49.985467   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:49.985527   21132 fix.go:103] recreateIfNeeded on embed-certs-035000: state= err=unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:49.985550   21132 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:33:50.029175   21132 out.go:177] * docker "embed-certs-035000" container is missing, will recreate.
	I0223 13:33:50.049825   21132 delete.go:124] DEMOLISHING embed-certs-035000 ...
	I0223 13:33:50.050020   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:50.106145   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:50.106190   21132 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:50.106204   21132 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:50.106589   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:50.160833   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:50.160879   21132 delete.go:82] Unable to get host status for embed-certs-035000, assuming it has already been deleted: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:50.160971   21132 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:33:50.215840   21132 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:33:50.215870   21132 kic.go:367] could not find the container embed-certs-035000 to remove it. will try anyways
	I0223 13:33:50.215941   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:50.271558   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:33:50.271610   21132 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:50.271694   21132 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0"
	W0223 13:33:50.327427   21132 cli_runner.go:211] docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:33:50.327462   21132 oci.go:641] error shutdown embed-certs-035000: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:51.329820   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:51.389407   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:51.389455   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:51.389464   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:51.389514   21132 retry.go:31] will retry after 364.112597ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:51.754622   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:51.814055   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:51.814101   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:51.814110   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:51.814131   21132 retry.go:31] will retry after 675.058117ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:52.491492   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:52.549774   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:52.549816   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:52.549824   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:52.549844   21132 retry.go:31] will retry after 715.86424ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:53.267786   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:53.326823   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:53.326867   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:53.326876   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:53.326895   21132 retry.go:31] will retry after 1.432317708s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:54.761593   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:54.819011   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:54.819063   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:54.819077   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:54.819097   21132 retry.go:31] will retry after 3.17214623s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:57.992387   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:33:58.048966   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:33:58.049019   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:33:58.049027   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:33:58.049048   21132 retry.go:31] will retry after 2.962071531s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:01.011749   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:01.067949   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:01.067995   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:01.068004   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:01.068033   21132 retry.go:31] will retry after 5.136211441s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:06.206654   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:06.265852   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:06.265897   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:06.265905   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:06.265940   21132 oci.go:88] couldn't shut down embed-certs-035000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	 
	I0223 13:34:06.266025   21132 cli_runner.go:164] Run: docker rm -f -v embed-certs-035000
	I0223 13:34:06.322672   21132 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:34:06.377025   21132 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:34:06.377150   21132 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:06.432406   21132 cli_runner.go:164] Run: docker network rm embed-certs-035000
	W0223 13:34:06.544800   21132 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:34:06.544819   21132 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:34:07.545307   21132 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:34:07.567477   21132 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:34:07.567647   21132 start.go:159] libmachine.API.Create for "embed-certs-035000" (driver="docker")
	I0223 13:34:07.567715   21132 client.go:168] LocalClient.Create starting
	I0223 13:34:07.567937   21132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:34:07.568038   21132 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:07.568072   21132 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:07.568204   21132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:34:07.568289   21132 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:07.568320   21132 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:07.569033   21132 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:34:07.626305   21132 cli_runner.go:211] docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:34:07.626393   21132 network_create.go:281] running [docker network inspect embed-certs-035000] to gather additional debugging logs...
	I0223 13:34:07.626412   21132 cli_runner.go:164] Run: docker network inspect embed-certs-035000
	W0223 13:34:07.680194   21132 cli_runner.go:211] docker network inspect embed-certs-035000 returned with exit code 1
	I0223 13:34:07.680221   21132 network_create.go:284] error running [docker network inspect embed-certs-035000]: docker network inspect embed-certs-035000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-035000
	I0223 13:34:07.680234   21132 network_create.go:286] output of [docker network inspect embed-certs-035000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-035000
	
	** /stderr **
	I0223 13:34:07.680319   21132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:07.736799   21132 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:07.737139   21132 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0010747f0}
	I0223 13:34:07.737152   21132 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:34:07.737224   21132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	W0223 13:34:07.792787   21132 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000 returned with exit code 1
	W0223 13:34:07.792827   21132 network_create.go:148] failed to create docker network embed-certs-035000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:34:07.792845   21132 network_create.go:115] failed to create docker network embed-certs-035000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:34:07.794311   21132 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:07.794625   21132 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001075640}
	I0223 13:34:07.794635   21132 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:34:07.794694   21132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	W0223 13:34:07.848955   21132 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000 returned with exit code 1
	W0223 13:34:07.848985   21132 network_create.go:148] failed to create docker network embed-certs-035000 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:34:07.849001   21132 network_create.go:115] failed to create docker network embed-certs-035000 192.168.67.0/24, will retry: subnet is taken
	I0223 13:34:07.850550   21132 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:07.850867   21132 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000aff010}
	I0223 13:34:07.850879   21132 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:34:07.850969   21132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	W0223 13:34:07.905192   21132 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000 returned with exit code 1
	W0223 13:34:07.905228   21132 network_create.go:148] failed to create docker network embed-certs-035000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:34:07.905241   21132 network_create.go:115] failed to create docker network embed-certs-035000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:34:07.906792   21132 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:07.907089   21132 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000affe60}
	I0223 13:34:07.907100   21132 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:34:07.907174   21132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	I0223 13:34:07.994293   21132 network_create.go:107] docker network embed-certs-035000 192.168.85.0/24 created
	I0223 13:34:07.994325   21132 kic.go:117] calculated static IP "192.168.85.2" for the "embed-certs-035000" container
	I0223 13:34:07.994454   21132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:34:08.052023   21132 cli_runner.go:164] Run: docker volume create embed-certs-035000 --label name.minikube.sigs.k8s.io=embed-certs-035000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:34:08.105656   21132 oci.go:103] Successfully created a docker volume embed-certs-035000
	I0223 13:34:08.105775   21132 cli_runner.go:164] Run: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:34:08.238433   21132 cli_runner.go:211] docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:34:08.238485   21132 client.go:171] LocalClient.Create took 670.755962ms
	I0223 13:34:10.238793   21132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:10.238906   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:10.298764   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:10.298859   21132 retry.go:31] will retry after 300.071506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:10.601236   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:10.660207   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:10.660292   21132 retry.go:31] will retry after 436.733853ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:11.098098   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:11.155131   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:11.155225   21132 retry.go:31] will retry after 638.508953ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:11.796128   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:11.854907   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:11.855006   21132 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:11.855018   21132 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:11.855073   21132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:11.855128   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:11.911219   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:11.911306   21132 retry.go:31] will retry after 260.759098ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:12.173541   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:12.229988   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:12.230083   21132 retry.go:31] will retry after 227.860968ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:12.459327   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:12.516716   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:12.516799   21132 retry.go:31] will retry after 779.420092ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:13.296913   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:13.356576   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:13.356659   21132 retry.go:31] will retry after 463.198714ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:13.820333   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:13.879148   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:13.879246   21132 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:13.879258   21132 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:13.879262   21132 start.go:128] duration metric: createHost completed in 6.333914995s
	I0223 13:34:13.879330   21132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:13.879402   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:13.934504   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:13.934590   21132 retry.go:31] will retry after 125.187618ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:14.062063   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:14.117467   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:14.117545   21132 retry.go:31] will retry after 326.905703ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:14.446901   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:14.506676   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:14.506756   21132 retry.go:31] will retry after 705.587802ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:15.213552   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:15.274737   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:15.274828   21132 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:15.274843   21132 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:15.274899   21132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:15.274957   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:15.330486   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:15.330570   21132 retry.go:31] will retry after 345.207182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:15.676506   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:15.733727   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:15.733810   21132 retry.go:31] will retry after 541.918818ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:16.278146   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:16.336086   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:16.336166   21132 retry.go:31] will retry after 602.471583ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:16.939440   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:16.997826   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:16.997913   21132 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:16.997932   21132 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:16.997936   21132 fix.go:57] fixHost completed within 27.067918427s
	I0223 13:34:16.997943   21132 start.go:83] releasing machines lock for "embed-certs-035000", held for 27.067965139s
	W0223 13:34:16.997959   21132 start.go:691] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W0223 13:34:16.998091   21132 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:34:16.998099   21132 start.go:706] Will try again in 5 seconds ...
	I0223 13:34:21.998513   21132 start.go:364] acquiring machines lock for embed-certs-035000: {Name:mk109788415ddd73a83a349dd1a61647eb0703e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:34:21.998691   21132 start.go:368] acquired machines lock for "embed-certs-035000" in 142.938µs
	I0223 13:34:21.998750   21132 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:34:21.998759   21132 fix.go:55] fixHost starting: 
	I0223 13:34:21.999203   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:22.055443   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:22.055485   21132 fix.go:103] recreateIfNeeded on embed-certs-035000: state= err=unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:22.055494   21132 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:34:22.077319   21132 out.go:177] * docker "embed-certs-035000" container is missing, will recreate.
	I0223 13:34:22.119206   21132 delete.go:124] DEMOLISHING embed-certs-035000 ...
	I0223 13:34:22.119479   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:22.175454   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:22.175513   21132 stop.go:75] unable to get state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:22.175528   21132 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:22.175917   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:22.230388   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:22.230436   21132 delete.go:82] Unable to get host status for embed-certs-035000, assuming it has already been deleted: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:22.230524   21132 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:34:22.284317   21132 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:34:22.284342   21132 kic.go:367] could not find the container embed-certs-035000 to remove it. will try anyways
	I0223 13:34:22.284419   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:22.338916   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:22.338959   21132 oci.go:84] error getting container status, will try to delete anyways: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:22.339038   21132 cli_runner.go:164] Run: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0"
	W0223 13:34:22.393851   21132 cli_runner.go:211] docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:34:22.393879   21132 oci.go:641] error shutdown embed-certs-035000: docker exec --privileged -t embed-certs-035000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:23.396269   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:23.456521   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:23.456568   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:23.456577   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:23.456596   21132 retry.go:31] will retry after 297.182692ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:23.756157   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:23.814240   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:23.814288   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:23.814296   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:23.814317   21132 retry.go:31] will retry after 825.699811ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:24.640275   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:24.699018   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:24.699061   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:24.699070   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:24.699090   21132 retry.go:31] will retry after 880.37003ms: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:25.580633   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:25.637684   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:25.637725   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:25.637733   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:25.637753   21132 retry.go:31] will retry after 1.872174242s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:27.512232   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:27.572418   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:27.572464   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:27.572483   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:27.572510   21132 retry.go:31] will retry after 1.629368722s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:29.203149   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:29.262722   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:29.262765   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:29.262772   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:29.262792   21132 retry.go:31] will retry after 3.674068971s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:32.937139   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:32.992337   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:32.992388   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:32.992397   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:32.992418   21132 retry.go:31] will retry after 3.152486892s: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:36.147268   21132 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:36.208390   21132 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:36.208433   21132 oci.go:653] temporary error verifying shutdown: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:36.208441   21132 oci.go:655] temporary error: container embed-certs-035000 status is  but expect it to be exited
	I0223 13:34:36.208476   21132 oci.go:88] couldn't shut down embed-certs-035000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	 
	I0223 13:34:36.208561   21132 cli_runner.go:164] Run: docker rm -f -v embed-certs-035000
	I0223 13:34:36.266956   21132 cli_runner.go:164] Run: docker container inspect -f {{.Id}} embed-certs-035000
	W0223 13:34:36.321489   21132 cli_runner.go:211] docker container inspect -f {{.Id}} embed-certs-035000 returned with exit code 1
	I0223 13:34:36.321593   21132 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:36.376497   21132 cli_runner.go:164] Run: docker network rm embed-certs-035000
	W0223 13:34:36.477466   21132 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:34:36.477486   21132 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:34:37.478210   21132 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:34:37.500386   21132 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:34:37.500557   21132 start.go:159] libmachine.API.Create for "embed-certs-035000" (driver="docker")
	I0223 13:34:37.500598   21132 client.go:168] LocalClient.Create starting
	I0223 13:34:37.500825   21132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:34:37.500927   21132 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:37.500955   21132 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:37.501057   21132 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:34:37.501126   21132 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:37.501159   21132 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:37.522204   21132 cli_runner.go:164] Run: docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:34:37.580750   21132 cli_runner.go:211] docker network inspect embed-certs-035000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:34:37.580844   21132 network_create.go:281] running [docker network inspect embed-certs-035000] to gather additional debugging logs...
	I0223 13:34:37.580862   21132 cli_runner.go:164] Run: docker network inspect embed-certs-035000
	W0223 13:34:37.635517   21132 cli_runner.go:211] docker network inspect embed-certs-035000 returned with exit code 1
	I0223 13:34:37.635542   21132 network_create.go:284] error running [docker network inspect embed-certs-035000]: docker network inspect embed-certs-035000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: embed-certs-035000
	I0223 13:34:37.635553   21132 network_create.go:286] output of [docker network inspect embed-certs-035000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: embed-certs-035000
	
	** /stderr **
	I0223 13:34:37.635651   21132 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:37.692525   21132 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:37.694059   21132 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:37.695443   21132 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:37.696943   21132 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:37.698496   21132 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:37.700017   21132 network.go:212] skipping subnet 192.168.94.0/24 that is reserved: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:37.700545   21132 network.go:209] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017666e0}
	I0223 13:34:37.700577   21132 network_create.go:123] attempt to create docker network embed-certs-035000 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I0223 13:34:37.700665   21132 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-035000 embed-certs-035000
	I0223 13:34:37.787543   21132 network_create.go:107] docker network embed-certs-035000 192.168.103.0/24 created
	I0223 13:34:37.787573   21132 kic.go:117] calculated static IP "192.168.103.2" for the "embed-certs-035000" container
	I0223 13:34:37.787676   21132 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:34:37.844446   21132 cli_runner.go:164] Run: docker volume create embed-certs-035000 --label name.minikube.sigs.k8s.io=embed-certs-035000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:34:37.898183   21132 oci.go:103] Successfully created a docker volume embed-certs-035000
	I0223 13:34:37.898322   21132 cli_runner.go:164] Run: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:34:38.035969   21132 cli_runner.go:211] docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:34:38.036013   21132 client.go:171] LocalClient.Create took 535.405492ms
	I0223 13:34:40.036539   21132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:40.036656   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:40.094974   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:40.095060   21132 retry.go:31] will retry after 171.056883ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:40.268401   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:40.324008   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:40.324102   21132 retry.go:31] will retry after 544.119072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:40.869750   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:40.926516   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:40.926600   21132 retry.go:31] will retry after 425.0684ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:41.354014   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:41.412235   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:41.412323   21132 retry.go:31] will retry after 612.079455ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:42.026702   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:42.131917   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:42.132010   21132 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:42.132025   21132 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:42.132092   21132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:42.132149   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:42.187198   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:42.187289   21132 retry.go:31] will retry after 202.816957ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:42.391759   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:42.449364   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:42.449455   21132 retry.go:31] will retry after 445.313806ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:42.897228   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:42.956977   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:42.957061   21132 retry.go:31] will retry after 827.90247ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:43.785942   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:43.843781   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:43.843874   21132 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:43.843891   21132 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:43.843895   21132 start.go:128] duration metric: createHost completed in 6.365651668s
	I0223 13:34:43.843973   21132 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:43.844036   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:43.898639   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:43.898720   21132 retry.go:31] will retry after 288.087895ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:44.188108   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:44.245429   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:44.245512   21132 retry.go:31] will retry after 320.658482ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:44.568566   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:44.625738   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:44.625828   21132 retry.go:31] will retry after 645.762398ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:45.273990   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:45.331599   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:45.331699   21132 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:45.331711   21132 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:45.331770   21132 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:45.331823   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:45.387025   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:45.387115   21132 retry.go:31] will retry after 158.45415ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:45.546151   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:45.608144   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:45.608240   21132 retry.go:31] will retry after 292.500974ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:45.902289   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:45.960507   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	I0223 13:34:45.960591   21132 retry.go:31] will retry after 821.661036ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:46.784666   21132 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000
	W0223 13:34:46.840186   21132 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000 returned with exit code 1
	W0223 13:34:46.840276   21132 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:46.840289   21132 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "embed-certs-035000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-035000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	I0223 13:34:46.840302   21132 fix.go:57] fixHost completed within 24.841499363s
	I0223 13:34:46.840309   21132 start.go:83] releasing machines lock for "embed-certs-035000", held for 24.841562061s
	W0223 13:34:46.840458   21132 out.go:239] * Failed to start docker container. Running "minikube delete -p embed-certs-035000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p embed-certs-035000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:34:46.883553   21132 out.go:177] 
	W0223 13:34:46.905092   21132 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for embed-certs-035000 container: docker run --rm --name embed-certs-035000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-035000 --entrypoint /usr/bin/test -v embed-certs-035000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:34:46.905125   21132 out.go:239] * 
	* 
	W0223 13:34:46.906481   21132 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:34:46.989907   21132 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p embed-certs-035000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.26.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "322e3bf5ea137792d9cc02fd4274dfad339f0817d43e1ab0c924f44866eb8575",
	        "Created": "2023-02-23T21:34:37.751029554Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.335041ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0223 13:34:47.237570   21575 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/SecondStart (58.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.53s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (100.421866ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0223 13:34:00.700849   21212 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-571000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "2da02a4cb38be8fbd53edbe78461b2a8a58de1dc7405d6775b86167bbd5fdb3f",
	        "Created": "2023-02-23T21:33:38.708852464Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.67.0/24",
	                    "Gateway": "192.168.67.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (101.399009ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E0223 13:34:01.127092   21225 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.53s)
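Note: the assertion that fails at start_stop_delete_test.go:241 above expects the host to report "Stopped" after a stop, but the container no longer exists, so `minikube status` prints "Nonexistent" and exits with status 7. The sketch below mimics that post-stop check as a standalone program; it is illustrative only — the binary path, profile name, and expected value are taken from the log, and the error handling is deliberately simplified.

// post_stop_check.go: illustrative version of the post-stop assertion that
// fails above. Not the real start_stop_delete_test.go code.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	profile := "default-k8s-diff-port-571000"
	// Same invocation as the test's status check in the log above.
	// `minikube status` exits non-zero (exit status 7) when the host is not
	// running, so the exit error is ignored here and only stdout is compared.
	out, _ := exec.Command("out/minikube-darwin-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).Output()
	got := strings.TrimSpace(string(out))
	if got != "Stopped" {
		fmt.Printf("expected post-stop host status %q but got %q\n", "Stopped", got)
		os.Exit(1)
	}
	fmt.Println("host is Stopped as expected")
}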

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-571000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p default-k8s-diff-port-571000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (59.257213689s)

-- stdout --
	* [default-k8s-diff-port-571000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node default-k8s-diff-port-571000 in cluster default-k8s-diff-port-571000
	* Pulling base image ...
	* docker "default-k8s-diff-port-571000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "default-k8s-diff-port-571000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0223 13:34:01.170751   21230 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:34:01.170917   21230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:34:01.170922   21230 out.go:309] Setting ErrFile to fd 2...
	I0223 13:34:01.170926   21230 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:34:01.171037   21230 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:34:01.172343   21230 out.go:303] Setting JSON to false
	I0223 13:34:01.190749   21230 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3816,"bootTime":1677184225,"procs":392,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:34:01.190836   21230 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:34:01.212909   21230 out.go:177] * [default-k8s-diff-port-571000] minikube v1.29.0 on Darwin 13.2
	I0223 13:34:01.255090   21230 notify.go:220] Checking for updates...
	I0223 13:34:01.276756   21230 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:34:01.298032   21230 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:34:01.318981   21230 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:34:01.339938   21230 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:34:01.361045   21230 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:34:01.381841   21230 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:34:01.403476   21230 config.go:182] Loaded profile config "default-k8s-diff-port-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:34:01.404130   21230 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:34:01.465983   21230 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:34:01.466151   21230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:34:01.608692   21230 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:34:01.516979043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:34:01.652454   21230 out.go:177] * Using the docker driver based on existing profile
	I0223 13:34:01.674293   21230 start.go:296] selected driver: docker
	I0223 13:34:01.674320   21230 start.go:857] validating driver "docker" against &{Name:default-k8s-diff-port-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-571000 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:34:01.674465   21230 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:34:01.678301   21230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:34:01.818891   21230 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:34:01.727996856 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:34:01.819062   21230 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 13:34:01.819081   21230 cni.go:84] Creating CNI manager for ""
	I0223 13:34:01.819094   21230 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:34:01.819104   21230 start_flags.go:319] config:
	{Name:default-k8s-diff-port-571000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:default-k8s-diff-port-571000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:34:01.840989   21230 out.go:177] * Starting control plane node default-k8s-diff-port-571000 in cluster default-k8s-diff-port-571000
	I0223 13:34:01.862911   21230 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:34:01.884601   21230 out.go:177] * Pulling base image ...
	I0223 13:34:01.926687   21230 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:34:01.926721   21230 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:34:01.926811   21230 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:34:01.926848   21230 cache.go:57] Caching tarball of preloaded images
	I0223 13:34:01.927578   21230 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:34:01.927743   21230 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:34:01.928239   21230 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/default-k8s-diff-port-571000/config.json ...
	I0223 13:34:01.983903   21230 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:34:01.983920   21230 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:34:01.983940   21230 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:34:01.983987   21230 start.go:364] acquiring machines lock for default-k8s-diff-port-571000: {Name:mk040bb7b39c6c5d5f1dfd7a7376050165aac48b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:34:01.984084   21230 start.go:368] acquired machines lock for "default-k8s-diff-port-571000" in 75.402µs
	I0223 13:34:01.984112   21230 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:34:01.984121   21230 fix.go:55] fixHost starting: 
	I0223 13:34:01.984351   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:02.038129   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:02.038182   21230 fix.go:103] recreateIfNeeded on default-k8s-diff-port-571000: state= err=unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:02.038203   21230 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:34:02.081955   21230 out.go:177] * docker "default-k8s-diff-port-571000" container is missing, will recreate.
	I0223 13:34:02.104019   21230 delete.go:124] DEMOLISHING default-k8s-diff-port-571000 ...
	I0223 13:34:02.104224   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:02.160280   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:02.160319   21230 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:02.160339   21230 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:02.160745   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:02.216521   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:02.216574   21230 delete.go:82] Unable to get host status for default-k8s-diff-port-571000, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:02.216674   21230 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-571000
	W0223 13:34:02.270661   21230 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:02.270691   21230 kic.go:367] could not find the container default-k8s-diff-port-571000 to remove it. will try anyways
	I0223 13:34:02.270782   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:02.324491   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:02.324553   21230 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:02.324632   21230 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0"
	W0223 13:34:02.379573   21230 cli_runner.go:211] docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:34:02.379602   21230 oci.go:641] error shutdown default-k8s-diff-port-571000: docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:03.382042   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:03.439992   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:03.440035   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:03.440050   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:03.440104   21230 retry.go:31] will retry after 508.766519ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:03.951301   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:04.012152   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:04.012205   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:04.012213   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:04.012234   21230 retry.go:31] will retry after 772.649956ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:04.786506   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:04.843919   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:04.843964   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:04.843972   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:04.844008   21230 retry.go:31] will retry after 1.66231561s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:06.506744   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:06.561325   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:06.561367   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:06.561374   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:06.561394   21230 retry.go:31] will retry after 938.511867ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:07.500166   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:07.576313   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:07.576358   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:07.576367   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:07.576385   21230 retry.go:31] will retry after 3.445110744s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:11.022453   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:11.083790   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:11.083834   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:11.083843   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:11.083862   21230 retry.go:31] will retry after 3.473801591s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:14.557922   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:14.612589   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:14.612631   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:14.612638   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:14.612656   21230 retry.go:31] will retry after 4.172844401s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:18.786195   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:18.842951   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:18.842993   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:18.843000   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:18.843025   21230 oci.go:88] couldn't shut down default-k8s-diff-port-571000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	 
	I0223 13:34:18.843100   21230 cli_runner.go:164] Run: docker rm -f -v default-k8s-diff-port-571000
	I0223 13:34:18.900362   21230 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-571000
	W0223 13:34:18.954316   21230 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:18.954421   21230 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:19.009723   21230 cli_runner.go:164] Run: docker network rm default-k8s-diff-port-571000
	W0223 13:34:19.125357   21230 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:34:19.125375   21230 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:34:20.126851   21230 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:34:20.149240   21230 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:34:20.149468   21230 start.go:159] libmachine.API.Create for "default-k8s-diff-port-571000" (driver="docker")
	I0223 13:34:20.149520   21230 client.go:168] LocalClient.Create starting
	I0223 13:34:20.149732   21230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:34:20.149831   21230 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:20.149868   21230 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:20.149995   21230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:34:20.150065   21230 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:20.150085   21230 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:20.171446   21230 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:34:20.230738   21230 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:34:20.230834   21230 network_create.go:281] running [docker network inspect default-k8s-diff-port-571000] to gather additional debugging logs...
	I0223 13:34:20.230852   21230 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000
	W0223 13:34:20.286050   21230 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:20.286073   21230 network_create.go:284] error running [docker network inspect default-k8s-diff-port-571000]: docker network inspect default-k8s-diff-port-571000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-571000
	I0223 13:34:20.286086   21230 network_create.go:286] output of [docker network inspect default-k8s-diff-port-571000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-571000
	
	** /stderr **
	I0223 13:34:20.286175   21230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:20.341904   21230 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:20.343393   21230 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:20.344881   21230 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:20.346168   21230 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:20.347523   21230 network.go:212] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:20.347820   21230 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0003ab010}
	I0223 13:34:20.347833   21230 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I0223 13:34:20.347903   21230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	I0223 13:34:20.438593   21230 network_create.go:107] docker network default-k8s-diff-port-571000 192.168.94.0/24 created
	I0223 13:34:20.438628   21230 kic.go:117] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-571000" container
	I0223 13:34:20.438722   21230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:34:20.495406   21230 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-571000 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:34:20.550224   21230 oci.go:103] Successfully created a docker volume default-k8s-diff-port-571000
	I0223 13:34:20.550353   21230 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:34:20.691547   21230 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:34:20.691592   21230 client.go:171] LocalClient.Create took 542.062551ms
	I0223 13:34:22.692610   21230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:22.692726   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:22.749839   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:22.749932   21230 retry.go:31] will retry after 245.174838ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:22.997412   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:23.058164   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:23.058259   21230 retry.go:31] will retry after 341.20343ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:23.401160   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:23.458256   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:23.458336   21230 retry.go:31] will retry after 753.610799ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:24.213092   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:24.272317   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:24.272420   21230 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:24.272436   21230 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:24.272498   21230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:24.272555   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:24.328153   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:24.328239   21230 retry.go:31] will retry after 270.071854ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:24.598707   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:24.659238   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:24.659329   21230 retry.go:31] will retry after 497.104323ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:25.158470   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:25.216576   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:25.216667   21230 retry.go:31] will retry after 835.496479ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:26.054130   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:26.113684   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:26.113787   21230 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:26.113805   21230 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:26.113811   21230 start.go:128] duration metric: createHost completed in 5.986849299s
	I0223 13:34:26.113889   21230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:26.113947   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:26.168402   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:26.168483   21230 retry.go:31] will retry after 308.501305ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:26.479300   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:26.537058   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:26.537168   21230 retry.go:31] will retry after 188.586307ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:26.727674   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:26.787545   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:26.787642   21230 retry.go:31] will retry after 304.535159ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:27.094528   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:27.153321   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:27.153402   21230 retry.go:31] will retry after 891.674314ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:28.046877   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:28.102302   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:28.102398   21230 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:28.102420   21230 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:28.102487   21230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:28.102539   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:28.157128   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:28.157218   21230 retry.go:31] will retry after 304.771456ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:28.463337   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:28.523150   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:28.523235   21230 retry.go:31] will retry after 192.630737ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:28.718301   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:28.780052   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:28.780136   21230 retry.go:31] will retry after 494.392281ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:29.276932   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:29.334326   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:29.334420   21230 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:29.334435   21230 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:29.334440   21230 fix.go:57] fixHost completed within 27.350270441s
	I0223 13:34:29.334447   21230 start.go:83] releasing machines lock for "default-k8s-diff-port-571000", held for 27.350306416s
	W0223 13:34:29.334475   21230 start.go:691] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W0223 13:34:29.334599   21230 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:34:29.334607   21230 start.go:706] Will try again in 5 seconds ...
	I0223 13:34:34.335471   21230 start.go:364] acquiring machines lock for default-k8s-diff-port-571000: {Name:mk040bb7b39c6c5d5f1dfd7a7376050165aac48b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:34:34.335652   21230 start.go:368] acquired machines lock for "default-k8s-diff-port-571000" in 144.602µs
	I0223 13:34:34.335697   21230 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:34:34.335704   21230 fix.go:55] fixHost starting: 
	I0223 13:34:34.336131   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:34.397198   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:34.397243   21230 fix.go:103] recreateIfNeeded on default-k8s-diff-port-571000: state= err=unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:34.397253   21230 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:34:34.419335   21230 out.go:177] * docker "default-k8s-diff-port-571000" container is missing, will recreate.
	I0223 13:34:34.463207   21230 delete.go:124] DEMOLISHING default-k8s-diff-port-571000 ...
	I0223 13:34:34.463447   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:34.519618   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:34.519662   21230 stop.go:75] unable to get state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:34.519676   21230 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:34.520068   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:34.574352   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:34.574399   21230 delete.go:82] Unable to get host status for default-k8s-diff-port-571000, assuming it has already been deleted: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:34.574484   21230 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-571000
	W0223 13:34:34.628411   21230 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:34.628439   21230 kic.go:367] could not find the container default-k8s-diff-port-571000 to remove it. will try anyways
	I0223 13:34:34.628516   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:34.683583   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	W0223 13:34:34.683626   21230 oci.go:84] error getting container status, will try to delete anyways: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:34.683705   21230 cli_runner.go:164] Run: docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0"
	W0223 13:34:34.737419   21230 cli_runner.go:211] docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:34:34.737446   21230 oci.go:641] error shutdown default-k8s-diff-port-571000: docker exec --privileged -t default-k8s-diff-port-571000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:35.737747   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:35.796719   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:35.796761   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:35.796769   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:35.796788   21230 retry.go:31] will retry after 516.063895ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:36.314121   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:36.369745   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:36.369808   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:36.369818   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:36.369837   21230 retry.go:31] will retry after 761.501229ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:37.131737   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:37.191579   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:37.191625   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:37.191633   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:37.191655   21230 retry.go:31] will retry after 1.223942658s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:38.416443   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:38.476854   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:38.476897   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:38.476906   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:38.476925   21230 retry.go:31] will retry after 990.602512ms: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:39.469890   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:39.527241   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:39.527286   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:39.527294   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:39.527315   21230 retry.go:31] will retry after 2.512398473s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:42.039860   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:42.132862   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:42.132899   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:42.132905   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:42.132931   21230 retry.go:31] will retry after 4.880931132s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:47.014277   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:47.130505   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:47.130558   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:47.130574   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:47.130601   21230 retry.go:31] will retry after 3.268570999s: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:50.400327   21230 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:34:50.552377   21230 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:50.552434   21230 oci.go:653] temporary error verifying shutdown: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:50.552445   21230 oci.go:655] temporary error: container default-k8s-diff-port-571000 status is  but expect it to be exited
	I0223 13:34:50.552478   21230 oci.go:88] couldn't shut down default-k8s-diff-port-571000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	 
	I0223 13:34:50.552565   21230 cli_runner.go:164] Run: docker rm -f -v default-k8s-diff-port-571000
	I0223 13:34:50.610816   21230 cli_runner.go:164] Run: docker container inspect -f {{.Id}} default-k8s-diff-port-571000
	W0223 13:34:50.666003   21230 cli_runner.go:211] docker container inspect -f {{.Id}} default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:50.666104   21230 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:50.722398   21230 cli_runner.go:164] Run: docker network rm default-k8s-diff-port-571000
	W0223 13:34:50.828289   21230 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:34:50.828306   21230 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:34:51.830548   21230 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:34:51.856756   21230 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:34:51.856971   21230 start.go:159] libmachine.API.Create for "default-k8s-diff-port-571000" (driver="docker")
	I0223 13:34:51.857015   21230 client.go:168] LocalClient.Create starting
	I0223 13:34:51.857181   21230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:34:51.857269   21230 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:51.857292   21230 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:51.857396   21230 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:34:51.857479   21230 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:51.857497   21230 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:51.877977   21230 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:34:51.935217   21230 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:34:51.935315   21230 network_create.go:281] running [docker network inspect default-k8s-diff-port-571000] to gather additional debugging logs...
	I0223 13:34:51.935329   21230 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-571000
	W0223 13:34:51.989555   21230 cli_runner.go:211] docker network inspect default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:51.989578   21230 network_create.go:284] error running [docker network inspect default-k8s-diff-port-571000]: docker network inspect default-k8s-diff-port-571000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-571000
	I0223 13:34:51.989591   21230 network_create.go:286] output of [docker network inspect default-k8s-diff-port-571000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-571000
	
	** /stderr **
	I0223 13:34:51.989672   21230 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:52.045391   21230 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:52.046869   21230 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:52.048424   21230 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:52.048746   21230 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00124ee20}
	I0223 13:34:52.048758   21230 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:34:52.048850   21230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	W0223 13:34:52.103023   21230 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:52.103050   21230 network_create.go:148] failed to create docker network default-k8s-diff-port-571000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:34:52.103063   21230 network_create.go:115] failed to create docker network default-k8s-diff-port-571000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:34:52.104386   21230 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:52.104725   21230 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001751450}
	I0223 13:34:52.104738   21230 network_create.go:123] attempt to create docker network default-k8s-diff-port-571000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:34:52.104807   21230 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 default-k8s-diff-port-571000
	I0223 13:34:52.191257   21230 network_create.go:107] docker network default-k8s-diff-port-571000 192.168.85.0/24 created
	I0223 13:34:52.191305   21230 kic.go:117] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-571000" container
	I0223 13:34:52.191423   21230 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:34:52.249583   21230 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-571000 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:34:52.303853   21230 oci.go:103] Successfully created a docker volume default-k8s-diff-port-571000
	I0223 13:34:52.303991   21230 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:34:52.448601   21230 cli_runner.go:211] docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:34:52.448639   21230 client.go:171] LocalClient.Create took 591.614973ms
	I0223 13:34:54.451032   21230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:54.451253   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:54.510243   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:54.510344   21230 retry.go:31] will retry after 243.769289ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:54.754635   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:54.811656   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:54.811746   21230 retry.go:31] will retry after 538.5199ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:55.352297   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:55.411096   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:55.411193   21230 retry.go:31] will retry after 690.300072ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:56.103840   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:56.162799   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:56.162902   21230 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:56.162927   21230 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:56.162985   21230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:56.163042   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:56.218149   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:56.218251   21230 retry.go:31] will retry after 143.110694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:56.362928   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:56.420568   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:56.420640   21230 retry.go:31] will retry after 248.617437ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:56.669969   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:56.729255   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:56.729353   21230 retry.go:31] will retry after 724.734231ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:57.456182   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:57.514921   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:57.515029   21230 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:57.515049   21230 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:57.515056   21230 start.go:128] duration metric: createHost completed in 5.684448307s
	I0223 13:34:57.515127   21230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:57.515179   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:57.568852   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:57.568948   21230 retry.go:31] will retry after 276.589906ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:57.845877   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:57.905472   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:57.905560   21230 retry.go:31] will retry after 219.409723ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:58.127394   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:58.185662   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:58.185757   21230 retry.go:31] will retry after 508.017385ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:58.696192   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:58.754788   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:34:58.754888   21230 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:34:58.754915   21230 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:58.754972   21230 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:58.755016   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:58.810388   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:58.810471   21230 retry.go:31] will retry after 245.746358ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:59.058584   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:59.117226   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:59.117307   21230 retry.go:31] will retry after 283.281108ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:34:59.401069   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:34:59.462258   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	I0223 13:34:59.462352   21230 retry.go:31] will retry after 692.000104ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:35:00.156791   21230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000
	W0223 13:35:00.212896   21230 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000 returned with exit code 1
	W0223 13:35:00.213010   21230 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:35:00.213028   21230 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "default-k8s-diff-port-571000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-571000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	I0223 13:35:00.213034   21230 fix.go:57] fixHost completed within 25.877284547s
	I0223 13:35:00.213041   21230 start.go:83] releasing machines lock for "default-k8s-diff-port-571000", held for 25.877329845s
	W0223 13:35:00.213214   21230 out.go:239] * Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-571000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p default-k8s-diff-port-571000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:35:00.254911   21230 out.go:177] 
	W0223 13:35:00.277523   21230 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for default-k8s-diff-port-571000 container: docker run --rm --name default-k8s-diff-port-571000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-571000 --entrypoint /usr/bin/test -v default-k8s-diff-port-571000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:35:00.277549   21230 out.go:239] * 
	* 
	W0223 13:35:00.278855   21230 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:35:00.362279   21230 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p default-k8s-diff-port-571000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.26.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "413f83b086a5c16020df9d0ef2cc3c08d7b729317f6dbbc0627f39b294a498d9",
	        "Created": "2023-02-23T21:34:52.155241485Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (102.887468ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:00.564157   21785 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (59.44s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-035000" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "322e3bf5ea137792d9cc02fd4274dfad339f0817d43e1ab0c924f44866eb8575",
	        "Created": "2023-02-23T21:34:37.751029554Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (99.511935ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:34:47.396030   21581 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (0.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "embed-certs-035000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-035000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-035000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (34.260801ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-035000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-035000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "322e3bf5ea137792d9cc02fd4274dfad339f0817d43e1ab0c924f44866eb8575",
	        "Created": "2023-02-23T21:34:37.751029554Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.686376ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:34:47.591244   21588 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (0.20s)
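The addon check above fails one step earlier than the docker inspection: kubectl has no "embed-certs-035000" context, so `kubectl --context embed-certs-035000 describe deploy/dashboard-metrics-scraper` exits 1 before the expected " k8s.gcr.io/echoserver:1.4" image can even be looked for. A rough equivalent of that context precondition, as a sketch only (the helper name is invented, and it shells out to kubectl rather than using client-go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether the local kubeconfig contains the named context,
// the precondition the AddonExistsAfterStop check above never reaches.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, ctx := range strings.Fields(string(out)) {
		if ctx == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("embed-certs-035000")
	fmt.Println(ok, err) // false <nil> here, because the cluster never started
}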

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-035000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p embed-certs-035000 "sudo crictl images -o json": exit status 80 (192.135559ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p embed-certs-035000 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.26.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.6-0",
- 	"registry.k8s.io/kube-apiserver:v1.26.1",
- 	"registry.k8s.io/kube-controller-manager:v1.26.1",
- 	"registry.k8s.io/kube-proxy:v1.26.1",
- 	"registry.k8s.io/kube-scheduler:v1.26.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "322e3bf5ea137792d9cc02fd4274dfad339f0817d43e1ab0c924f44866eb8575",
	        "Created": "2023-02-23T21:34:37.751029554Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (99.699721ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:34:47.943320   21598 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)
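VerifyKubernetesImages produces the -want +got diff above because `sudo crictl images -o json` returned nothing (the ssh hop into the nonexistent node failed with exit status 80), so decoding the empty output fails with "unexpected end of JSON input" and every expected v1.26.1 image is reported as missing. A simplified sketch of that decode-and-diff step follows, with assumed struct fields rather than the exact crictl schema:

package main

import (
	"encoding/json"
	"fmt"
)

// crictlImage/crictlImages model just enough of `crictl images -o json`
// output for the comparison; the field names are assumptions for illustration.
type crictlImage struct {
	RepoTags []string `json:"repoTags"`
}

type crictlImages struct {
	Images []crictlImage `json:"images"`
}

// missingImages returns the expected tags that the runtime does not report.
// Empty input reproduces the "unexpected end of JSON input" failure above.
func missingImages(raw []byte, want []string) ([]string, error) {
	var parsed crictlImages
	if err := json.Unmarshal(raw, &parsed); err != nil {
		return want, err
	}
	got := map[string]bool{}
	for _, img := range parsed.Images {
		for _, tag := range img.RepoTags {
			got[tag] = true
		}
	}
	var missing []string
	for _, w := range want {
		if !got[w] {
			missing = append(missing, w)
		}
	}
	return missing, nil
}

func main() {
	want := []string{"registry.k8s.io/pause:3.9", "registry.k8s.io/kube-apiserver:v1.26.1"}
	missing, err := missingImages([]byte(""), want)
	fmt.Println(missing, err)
}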

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-035000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p embed-certs-035000 --alsologtostderr -v=1: exit status 80 (192.613569ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:34:47.987947   21602 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:34:47.988123   21602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:34:47.988128   21602 out.go:309] Setting ErrFile to fd 2...
	I0223 13:34:47.988132   21602 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:34:47.988236   21602 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:34:47.988558   21602 out.go:303] Setting JSON to false
	I0223 13:34:47.988579   21602 mustload.go:65] Loading cluster: embed-certs-035000
	I0223 13:34:47.988833   21602 config.go:182] Loaded profile config "embed-certs-035000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:34:47.989214   21602 cli_runner.go:164] Run: docker container inspect embed-certs-035000 --format={{.State.Status}}
	W0223 13:34:48.043577   21602 cli_runner.go:211] docker container inspect embed-certs-035000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:48.065999   21602 out.go:177] 
	W0223 13:34:48.087505   21602 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	X Exiting due to GUEST_STATUS: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000
	
	W0223 13:34:48.087538   21602 out.go:239] * 
	* 
	W0223 13:34:48.092317   21602 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:34:48.113496   21602 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p embed-certs-035000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "322e3bf5ea137792d9cc02fd4274dfad339f0817d43e1ab0c924f44866eb8575",
	        "Created": "2023-02-23T21:34:37.751029554Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (100.801615ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:34:48.296121   21608 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect embed-certs-035000
helpers_test.go:235: (dbg) docker inspect embed-certs-035000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "embed-certs-035000",
	        "Id": "322e3bf5ea137792d9cc02fd4274dfad339f0817d43e1ab0c924f44866eb8575",
	        "Created": "2023-02-23T21:34:37.751029554Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.103.0/24",
	                    "Gateway": "192.168.103.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "embed-certs-035000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-035000 -n embed-certs-035000: exit status 7 (99.752118ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:34:48.453705   21614 status.go:249] status error: host: state: unknown state "embed-certs-035000": docker container inspect embed-certs-035000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: embed-certs-035000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "embed-certs-035000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (0.51s)
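The newest-cni FirstStart run below fails for a different reason: the preload sidecar container cannot be run because the Docker Desktop containerd socket refuses connections. Before that, the log shows the normal subnet selection working: minikube skips the reserved 192.168.49.0/24, tries 192.168.58.0/24, gets "Pool overlaps with other one on this address space", and then succeeds with 192.168.67.0/24. The following is only a minimal sketch of that retry loop, with an assumed candidate list and none of minikube's reservation bookkeeping:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createClusterNetwork walks candidate /24 subnets and keeps the first one the
// Docker daemon accepts, treating a "Pool overlaps" rejection as "try the next".
func createClusterNetwork(name string, candidates []string) (string, error) {
	for _, subnet := range candidates {
		gateway := strings.TrimSuffix(subnet, "0/24") + "1" // 192.168.58.0/24 -> 192.168.58.1
		out, err := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet="+subnet,
			"--gateway="+gateway,
			"-o", "com.docker.network.driver.mtu=1500",
			name).CombinedOutput()
		if err == nil {
			return subnet, nil
		}
		if strings.Contains(string(out), "Pool overlaps") {
			continue // subnet taken by another network on this daemon
		}
		return "", fmt.Errorf("docker network create %s: %v: %s", subnet, err, out)
	}
	return "", fmt.Errorf("no free subnet found for %s", name)
}

func main() {
	subnet, err := createClusterNetwork("newest-cni-767000",
		[]string{"192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"})
	fmt.Println(subnet, err)
}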

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (41.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-767000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-767000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (41.387851279s)

                                                
                                                
-- stdout --
	* [newest-cni-767000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node newest-cni-767000 in cluster newest-cni-767000
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-767000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:34:49.724581   21651 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:34:49.724731   21651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:34:49.724737   21651 out.go:309] Setting ErrFile to fd 2...
	I0223 13:34:49.724741   21651 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:34:49.724851   21651 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:34:49.726292   21651 out.go:303] Setting JSON to false
	I0223 13:34:49.745067   21651 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3864,"bootTime":1677184225,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:34:49.745150   21651 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:34:49.766818   21651 out.go:177] * [newest-cni-767000] minikube v1.29.0 on Darwin 13.2
	I0223 13:34:49.810605   21651 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:34:49.810593   21651 notify.go:220] Checking for updates...
	I0223 13:34:49.854569   21651 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:34:49.875331   21651 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:34:49.896632   21651 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:34:49.918690   21651 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:34:49.942498   21651 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:34:49.964059   21651 config.go:182] Loaded profile config "cert-expiration-946000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:34:49.964187   21651 config.go:182] Loaded profile config "default-k8s-diff-port-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:34:49.964298   21651 config.go:182] Loaded profile config "missing-upgrade-640000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.0
	I0223 13:34:49.964361   21651 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:34:50.026276   21651 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:34:50.026377   21651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:34:50.167323   21651 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:34:50.075802616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:34:50.210934   21651 out.go:177] * Using the docker driver based on user configuration
	I0223 13:34:50.232136   21651 start.go:296] selected driver: docker
	I0223 13:34:50.232162   21651 start.go:857] validating driver "docker" against <nil>
	I0223 13:34:50.232179   21651 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:34:50.236127   21651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:34:50.377020   21651 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:34:50.285703035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:34:50.377121   21651 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	W0223 13:34:50.377141   21651 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0223 13:34:50.377324   21651 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0223 13:34:50.399057   21651 out.go:177] * Using Docker Desktop driver with root privileges
	I0223 13:34:50.420658   21651 cni.go:84] Creating CNI manager for ""
	I0223 13:34:50.420702   21651 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:34:50.420711   21651 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 13:34:50.420721   21651 start_flags.go:319] config:
	{Name:newest-cni-767000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-767000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:34:50.462721   21651 out.go:177] * Starting control plane node newest-cni-767000 in cluster newest-cni-767000
	I0223 13:34:50.483825   21651 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:34:50.505985   21651 out.go:177] * Pulling base image ...
	I0223 13:34:50.549050   21651 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:34:50.549157   21651 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:34:50.549157   21651 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:34:50.549183   21651 cache.go:57] Caching tarball of preloaded images
	I0223 13:34:50.549425   21651 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:34:50.549445   21651 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:34:50.550325   21651 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/newest-cni-767000/config.json ...
	I0223 13:34:50.550502   21651 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/newest-cni-767000/config.json: {Name:mkb3e9b17d950414f410be1f4cbc20d6225f71c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 13:34:50.610842   21651 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:34:50.610897   21651 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:34:50.610921   21651 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:34:50.610986   21651 start.go:364] acquiring machines lock for newest-cni-767000: {Name:mka7b360626537fa2584605db7207cfd3caf5aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:34:50.611215   21651 start.go:368] acquired machines lock for "newest-cni-767000" in 213.046µs
	I0223 13:34:50.611261   21651 start.go:93] Provisioning new machine with config: &{Name:newest-cni-767000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-767000 Namespace:default APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:} &{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0223 13:34:50.611386   21651 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:34:50.653663   21651 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:34:50.653883   21651 start.go:159] libmachine.API.Create for "newest-cni-767000" (driver="docker")
	I0223 13:34:50.653908   21651 client.go:168] LocalClient.Create starting
	I0223 13:34:50.654020   21651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:34:50.654070   21651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:50.654089   21651 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:50.654158   21651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:34:50.654194   21651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:34:50.654207   21651 main.go:141] libmachine: Parsing certificate...
	I0223 13:34:50.654735   21651 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:34:50.711152   21651 cli_runner.go:211] docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:34:50.711261   21651 network_create.go:281] running [docker network inspect newest-cni-767000] to gather additional debugging logs...
	I0223 13:34:50.711280   21651 cli_runner.go:164] Run: docker network inspect newest-cni-767000
	W0223 13:34:50.767631   21651 cli_runner.go:211] docker network inspect newest-cni-767000 returned with exit code 1
	I0223 13:34:50.767677   21651 network_create.go:284] error running [docker network inspect newest-cni-767000]: docker network inspect newest-cni-767000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-767000
	I0223 13:34:50.767695   21651 network_create.go:286] output of [docker network inspect newest-cni-767000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-767000
	
	** /stderr **
	I0223 13:34:50.767799   21651 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:34:50.824894   21651 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:50.825235   21651 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00140b1b0}
	I0223 13:34:50.825251   21651 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:34:50.825320   21651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	W0223 13:34:50.880398   21651 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000 returned with exit code 1
	W0223 13:34:50.880427   21651 network_create.go:148] failed to create docker network newest-cni-767000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:34:50.880453   21651 network_create.go:115] failed to create docker network newest-cni-767000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:34:50.881792   21651 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:34:50.882117   21651 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000616be0}
	I0223 13:34:50.882128   21651 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:34:50.882205   21651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	I0223 13:34:50.968693   21651 network_create.go:107] docker network newest-cni-767000 192.168.67.0/24 created
	I0223 13:34:50.968724   21651 kic.go:117] calculated static IP "192.168.67.2" for the "newest-cni-767000" container
	I0223 13:34:50.968840   21651 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:34:51.026227   21651 cli_runner.go:164] Run: docker volume create newest-cni-767000 --label name.minikube.sigs.k8s.io=newest-cni-767000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:34:51.081350   21651 oci.go:103] Successfully created a docker volume newest-cni-767000
	I0223 13:34:51.081491   21651 cli_runner.go:164] Run: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:34:51.298710   21651 cli_runner.go:211] docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:34:51.298760   21651 client.go:171] LocalClient.Create took 644.843597ms
	I0223 13:34:53.301221   21651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:34:53.301370   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:53.361894   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:53.362024   21651 retry.go:31] will retry after 342.097775ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:53.706572   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:53.767015   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:53.767116   21651 retry.go:31] will retry after 283.130726ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:54.051602   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:54.110839   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:54.110927   21651 retry.go:31] will retry after 497.031168ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:54.610307   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:54.666262   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:34:54.666361   21651 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:34:54.666382   21651 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:54.666446   21651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:34:54.666495   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:54.720700   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:54.720785   21651 retry.go:31] will retry after 239.999674ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:54.961969   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:55.018648   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:55.018736   21651 retry.go:31] will retry after 440.481993ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:55.459620   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:55.515638   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:55.515731   21651 retry.go:31] will retry after 330.548051ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:55.848674   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:55.908235   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:34:55.908327   21651 retry.go:31] will retry after 457.238288ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:56.366951   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:34:56.420458   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:34:56.420553   21651 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:34:56.420568   21651 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:34:56.420574   21651 start.go:128] duration metric: createHost completed in 5.809172935s
	I0223 13:34:56.420580   21651 start.go:83] releasing machines lock for "newest-cni-767000", held for 5.809345654s
	W0223 13:34:56.420596   21651 start.go:691] error starting host: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	I0223 13:34:56.421042   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:34:56.474920   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:34:56.474974   21651 delete.go:82] Unable to get host status for newest-cni-767000, assuming it has already been deleted: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:34:56.475113   21651 out.go:239] ! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:34:56.475121   21651 start.go:706] Will try again in 5 seconds ...
	I0223 13:35:01.475250   21651 start.go:364] acquiring machines lock for newest-cni-767000: {Name:mka7b360626537fa2584605db7207cfd3caf5aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:35:01.475348   21651 start.go:368] acquired machines lock for "newest-cni-767000" in 74.599µs
	I0223 13:35:01.475367   21651 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:35:01.475383   21651 fix.go:55] fixHost starting: 
	I0223 13:35:01.475621   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:01.532804   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:01.532849   21651 fix.go:103] recreateIfNeeded on newest-cni-767000: state= err=unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:01.532866   21651 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:35:01.554385   21651 out.go:177] * docker "newest-cni-767000" container is missing, will recreate.
	I0223 13:35:01.596166   21651 delete.go:124] DEMOLISHING newest-cni-767000 ...
	I0223 13:35:01.596289   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:01.651690   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:01.651741   21651 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:01.651755   21651 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:01.652138   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:01.707143   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:01.707188   21651 delete.go:82] Unable to get host status for newest-cni-767000, assuming it has already been deleted: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:01.707265   21651 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-767000
	W0223 13:35:01.762826   21651 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-767000 returned with exit code 1
	I0223 13:35:01.762857   21651 kic.go:367] could not find the container newest-cni-767000 to remove it. will try anyways
	I0223 13:35:01.762931   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:01.818086   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:01.818125   21651 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:01.818211   21651 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0"
	W0223 13:35:01.874096   21651 cli_runner.go:211] docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:35:01.874132   21651 oci.go:641] error shutdown newest-cni-767000: docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:02.874370   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:02.930278   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:02.930329   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:02.930342   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:02.930366   21651 retry.go:31] will retry after 356.840806ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:03.289498   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:03.348904   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:03.348955   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:03.348964   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:03.348982   21651 retry.go:31] will retry after 884.039224ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:04.235428   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:04.294541   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:04.294585   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:04.294594   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:04.294615   21651 retry.go:31] will retry after 1.552260743s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:05.849275   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:05.909346   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:05.909388   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:05.909400   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:05.909420   21651 retry.go:31] will retry after 966.951379ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:06.878777   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:06.936908   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:06.936954   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:06.936961   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:06.936982   21651 retry.go:31] will retry after 1.790902995s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:08.729048   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:08.788257   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:08.788301   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:08.788310   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:08.788330   21651 retry.go:31] will retry after 4.47878034s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:13.268472   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:13.325734   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:13.325777   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:13.325784   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:13.325817   21651 retry.go:31] will retry after 7.164479882s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:20.492756   21651 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:20.551722   21651 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:20.551764   21651 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:20.551773   21651 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:20.551798   21651 oci.go:88] couldn't shut down newest-cni-767000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	 
	I0223 13:35:20.551872   21651 cli_runner.go:164] Run: docker rm -f -v newest-cni-767000
	I0223 13:35:20.610388   21651 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-767000
	W0223 13:35:20.665653   21651 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-767000 returned with exit code 1
	I0223 13:35:20.665762   21651 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:35:20.720536   21651 cli_runner.go:164] Run: docker network rm newest-cni-767000
	W0223 13:35:20.825883   21651 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:35:20.825913   21651 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:35:21.827224   21651 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:35:21.851037   21651 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:35:21.851274   21651 start.go:159] libmachine.API.Create for "newest-cni-767000" (driver="docker")
	I0223 13:35:21.851344   21651 client.go:168] LocalClient.Create starting
	I0223 13:35:21.851528   21651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:35:21.851628   21651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:35:21.851656   21651 main.go:141] libmachine: Parsing certificate...
	I0223 13:35:21.851752   21651 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:35:21.851815   21651 main.go:141] libmachine: Decoding PEM data...
	I0223 13:35:21.851832   21651 main.go:141] libmachine: Parsing certificate...
	I0223 13:35:21.871507   21651 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:35:21.928348   21651 cli_runner.go:211] docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:35:21.928435   21651 network_create.go:281] running [docker network inspect newest-cni-767000] to gather additional debugging logs...
	I0223 13:35:21.928453   21651 cli_runner.go:164] Run: docker network inspect newest-cni-767000
	W0223 13:35:21.984138   21651 cli_runner.go:211] docker network inspect newest-cni-767000 returned with exit code 1
	I0223 13:35:21.984162   21651 network_create.go:284] error running [docker network inspect newest-cni-767000]: docker network inspect newest-cni-767000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-767000
	I0223 13:35:21.984181   21651 network_create.go:286] output of [docker network inspect newest-cni-767000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-767000
	
	** /stderr **
	I0223 13:35:21.984261   21651 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:35:22.041555   21651 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:35:22.043087   21651 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:35:22.044613   21651 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:35:22.044996   21651 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ed0c60}
	I0223 13:35:22.045007   21651 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:35:22.045081   21651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	W0223 13:35:22.099019   21651 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000 returned with exit code 1
	W0223 13:35:22.099050   21651 network_create.go:148] failed to create docker network newest-cni-767000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:35:22.099062   21651 network_create.go:115] failed to create docker network newest-cni-767000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:35:22.100384   21651 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:35:22.100725   21651 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001218500}
	I0223 13:35:22.100736   21651 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:35:22.100799   21651 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	I0223 13:35:22.188590   21651 network_create.go:107] docker network newest-cni-767000 192.168.85.0/24 created
	I0223 13:35:22.188619   21651 kic.go:117] calculated static IP "192.168.85.2" for the "newest-cni-767000" container
	I0223 13:35:22.188730   21651 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:35:22.245822   21651 cli_runner.go:164] Run: docker volume create newest-cni-767000 --label name.minikube.sigs.k8s.io=newest-cni-767000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:35:22.300670   21651 oci.go:103] Successfully created a docker volume newest-cni-767000
	I0223 13:35:22.300795   21651 cli_runner.go:164] Run: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:35:22.436942   21651 cli_runner.go:211] docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:35:22.436984   21651 client.go:171] LocalClient.Create took 585.628988ms
	I0223 13:35:24.439258   21651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:35:24.439495   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:24.500118   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:24.500205   21651 retry.go:31] will retry after 258.207285ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:24.759032   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:24.816807   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:24.816902   21651 retry.go:31] will retry after 203.376969ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:25.020951   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:25.080121   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:25.080224   21651 retry.go:31] will retry after 577.797672ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:25.660414   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:25.718951   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:35:25.719047   21651 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:35:25.719061   21651 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:25.719127   21651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:35:25.719183   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:25.772905   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:25.772991   21651 retry.go:31] will retry after 285.118034ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:26.060569   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:26.117343   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:26.117430   21651 retry.go:31] will retry after 255.576233ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:26.375408   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:26.436314   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:26.436416   21651 retry.go:31] will retry after 292.363589ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:26.729139   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:26.787779   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:26.787874   21651 retry.go:31] will retry after 633.817798ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:27.422040   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:27.484078   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:35:27.484179   21651 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:35:27.484194   21651 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:27.484200   21651 start.go:128] duration metric: createHost completed in 5.656943015s
	I0223 13:35:27.484281   21651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:35:27.484343   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:27.538306   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:27.538386   21651 retry.go:31] will retry after 218.993589ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:27.759646   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:27.816736   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:27.816810   21651 retry.go:31] will retry after 544.632026ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:28.363816   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:28.421711   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:28.421795   21651 retry.go:31] will retry after 455.669151ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:28.879865   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:28.937116   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:35:28.937209   21651 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:35:28.937226   21651 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:28.937284   21651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:35:28.937338   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:28.991480   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:28.991566   21651 retry.go:31] will retry after 132.899182ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:29.126729   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:29.185744   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:29.185835   21651 retry.go:31] will retry after 324.949277ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:29.511243   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:29.570994   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:29.571079   21651 retry.go:31] will retry after 308.031014ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:29.881526   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:29.938968   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:35:29.939053   21651 retry.go:31] will retry after 919.399229ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:30.858980   21651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:35:30.916446   21651 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:35:30.916545   21651 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:35:30.916561   21651 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:30.916571   21651 fix.go:57] fixHost completed within 29.441129281s
	I0223 13:35:30.916578   21651 start.go:83] releasing machines lock for "newest-cni-767000", held for 29.441170566s
	W0223 13:35:30.916726   21651 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-767000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-767000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:35:30.960413   21651 out.go:177] 
	W0223 13:35:30.982286   21651 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:35:30.982303   21651 out.go:239] * 
	* 
	W0223 13:35:30.983075   21651 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:35:31.046020   21651 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p newest-cni-767000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1": exit status 80
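The stderr above shows the actual root cause: every docker run / docker network call fails because Docker Desktop's containerd socket (/var/run/desktop-containerd/containerd.sock) refuses connections, so the kicbase preload sidecar can never start. As a hedged, minimal shell sketch (not part of the test harness; the profile name is taken from the log and the start flags are abbreviated), one way to confirm the daemon is healthy before retrying:

	# Fails fast if the Docker Desktop daemon is unreachable.
	docker version --format 'server: {{.Server.Version}}' || exit 1
	# Exercises container creation end to end with a throwaway container.
	docker run --rm hello-world
	# If both succeed, recreate the profile, as the "may fix it" hint in the log suggests.
	minikube delete -p newest-cni-767000
	minikube start -p newest-cni-767000 --driver=docker --memory=2200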
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "04f3c15d5a7eb04a64ac81be857d441f55c4e3b3ea043022c88c663eee1a3785",
	        "Created": "2023-02-23T21:35:22.151697816Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (100.670394ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:31.239426   21996 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (41.56s)
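One step in the stderr worth calling out is the network setup: minikube skipped the reserved subnets 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24, tried 192.168.76.0/24, hit "Pool overlaps with other one on this address space", and then succeeded with 192.168.85.0/24. A rough shell sketch of the same idea (the subnet 192.168.90.0/24 and the network name example-net are illustrative assumptions, not values from the test):

	# List the subnets already claimed by existing Docker networks.
	for n in $(docker network ls -q); do
	  docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}' "$n"
	done
	# Pick a /24 that did not appear above and create the bridge network on it.
	docker network create --driver=bridge --subnet=192.168.90.0/24 --gateway=192.168.90.1 example-net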

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-571000" does not exist
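The failure here is purely a knock-on effect: the cluster for this profile never started, so minikube never wrote a kubeconfig context named default-k8s-diff-port-571000, and every kubectl call against it fails with "context ... does not exist". A small, hedged sketch of how to inspect and repair the context on a machine where the profile actually exists (not something the test itself runs):

	# Show which contexts kubectl knows about.
	kubectl config get-contexts
	# If the profile is running but its context is stale, have minikube rewrite it.
	minikube -p default-k8s-diff-port-571000 update-context
	kubectl config use-context default-k8s-diff-port-571000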
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "413f83b086a5c16020df9d0ef2cc3c08d7b729317f6dbbc0627f39b294a498d9",
	        "Created": "2023-02-23T21:34:52.155241485Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (101.188213ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:00.723848   21791 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (0.16s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "default-k8s-diff-port-571000" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-571000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-571000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (34.792425ms)

                                                
                                                
** stderr ** 
	error: context "default-k8s-diff-port-571000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-571000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
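For reference, when the context does exist the same check can be made with a narrower query than the describe call above; this is an illustrative alternative, not what the test runs, and it assumes the kubernetes-dashboard addon has been enabled:

	# Prints only the container images used by the scraper deployment.
	kubectl --context default-k8s-diff-port-571000 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'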
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "413f83b086a5c16020df9d0ef2cc3c08d7b729317f6dbbc0627f39b294a498d9",
	        "Created": "2023-02-23T21:34:52.155241485Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (100.899464ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:00.918890   21799 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-571000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-571000 "sudo crictl images -o json": exit status 80 (191.749684ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-571000 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.26.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.6-0",
- 	"registry.k8s.io/kube-apiserver:v1.26.1",
- 	"registry.k8s.io/kube-controller-manager:v1.26.1",
- 	"registry.k8s.io/kube-proxy:v1.26.1",
- 	"registry.k8s.io/kube-scheduler:v1.26.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "413f83b086a5c16020df9d0ef2cc3c08d7b729317f6dbbc0627f39b294a498d9",
	        "Created": "2023-02-23T21:34:52.155241485Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (100.1233ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:01.269562   21811 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-571000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-571000 --alsologtostderr -v=1: exit status 80 (191.268174ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:35:01.313826   21815 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:35:01.313997   21815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:35:01.314003   21815 out.go:309] Setting ErrFile to fd 2...
	I0223 13:35:01.314007   21815 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:35:01.314116   21815 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:35:01.314446   21815 out.go:303] Setting JSON to false
	I0223 13:35:01.314467   21815 mustload.go:65] Loading cluster: default-k8s-diff-port-571000
	I0223 13:35:01.314753   21815 config.go:182] Loaded profile config "default-k8s-diff-port-571000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:35:01.315149   21815 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}
	W0223 13:35:01.369271   21815 cli_runner.go:211] docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:01.391408   21815 out.go:177] 
	W0223 13:35:01.413095   21815 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	X Exiting due to GUEST_STATUS: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000
	
	W0223 13:35:01.413125   21815 out.go:239] * 
	* 
	W0223 13:35:01.418817   21815 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:35:01.439998   21815 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p default-k8s-diff-port-571000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "413f83b086a5c16020df9d0ef2cc3c08d7b729317f6dbbc0627f39b294a498d9",
	        "Created": "2023-02-23T21:34:52.155241485Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (111.845719ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:01.633056   21823 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect default-k8s-diff-port-571000
helpers_test.go:235: (dbg) docker inspect default-k8s-diff-port-571000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "default-k8s-diff-port-571000",
	        "Id": "413f83b086a5c16020df9d0ef2cc3c08d7b729317f6dbbc0627f39b294a498d9",
	        "Created": "2023-02-23T21:34:52.155241485Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "default-k8s-diff-port-571000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-571000 -n default-k8s-diff-port-571000: exit status 7 (102.964981ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:01.796510   21833 status.go:249] status error: host: state: unknown state "default-k8s-diff-port-571000": docker container inspect default-k8s-diff-port-571000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: default-k8s-diff-port-571000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-571000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (0.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-767000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-darwin-amd64 stop -p newest-cni-767000 --alsologtostderr -v=3: exit status 82 (13.128986702s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-767000"  ...
	* Stopping node "newest-cni-767000"  ...
	* Stopping node "newest-cni-767000"  ...
	* Stopping node "newest-cni-767000"  ...
	* Stopping node "newest-cni-767000"  ...
	* Stopping node "newest-cni-767000"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:35:31.509359   22004 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:35:31.509535   22004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:35:31.509540   22004 out.go:309] Setting ErrFile to fd 2...
	I0223 13:35:31.509544   22004 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:35:31.509660   22004 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:35:31.509981   22004 out.go:303] Setting JSON to false
	I0223 13:35:31.510139   22004 mustload.go:65] Loading cluster: newest-cni-767000
	I0223 13:35:31.510422   22004 config.go:182] Loaded profile config "newest-cni-767000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:35:31.510488   22004 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/newest-cni-767000/config.json ...
	I0223 13:35:31.510763   22004 mustload.go:65] Loading cluster: newest-cni-767000
	I0223 13:35:31.510862   22004 config.go:182] Loaded profile config "newest-cni-767000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:35:31.510893   22004 stop.go:39] StopHost: newest-cni-767000
	I0223 13:35:31.532659   22004 out.go:177] * Stopping node "newest-cni-767000"  ...
	I0223 13:35:31.575555   22004 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:31.631077   22004 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:31.631148   22004 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:35:31.631171   22004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:31.631212   22004 retry.go:31] will retry after 1.450711077s: docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:33.082756   22004 stop.go:39] StopHost: newest-cni-767000
	I0223 13:35:33.106187   22004 out.go:177] * Stopping node "newest-cni-767000"  ...
	I0223 13:35:33.148050   22004 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:33.205500   22004 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:33.205537   22004 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:35:33.205550   22004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:33.205563   22004 retry.go:31] will retry after 1.427535256s: docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:34.633759   22004 stop.go:39] StopHost: newest-cni-767000
	I0223 13:35:34.656177   22004 out.go:177] * Stopping node "newest-cni-767000"  ...
	I0223 13:35:34.698786   22004 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:34.755146   22004 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:34.755194   22004 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:35:34.755207   22004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:34.755223   22004 retry.go:31] will retry after 1.165256285s: docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:35.922183   22004 stop.go:39] StopHost: newest-cni-767000
	I0223 13:35:35.946364   22004 out.go:177] * Stopping node "newest-cni-767000"  ...
	I0223 13:35:35.988575   22004 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:36.045589   22004 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:36.045630   22004 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:35:36.045640   22004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:36.045653   22004 retry.go:31] will retry after 3.241015832s: docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:39.288792   22004 stop.go:39] StopHost: newest-cni-767000
	I0223 13:35:39.311086   22004 out.go:177] * Stopping node "newest-cni-767000"  ...
	I0223 13:35:39.332876   22004 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:39.393500   22004 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:39.393540   22004 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:35:39.393554   22004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:39.393569   22004 retry.go:31] will retry after 4.927648466s: docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:44.321866   22004 stop.go:39] StopHost: newest-cni-767000
	I0223 13:35:44.344046   22004 out.go:177] * Stopping node "newest-cni-767000"  ...
	I0223 13:35:44.386952   22004 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:44.444047   22004 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:44.444087   22004 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	W0223 13:35:44.444099   22004 stop.go:163] stop host returned error: ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:44.465772   22004 out.go:177] 
	W0223 13:35:44.487495   22004 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-767000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	X Exiting due to GUEST_STOP_TIMEOUT: docker container inspect newest-cni-767000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:35:44.487510   22004 out.go:239] * 
	* 
	W0223 13:35:44.490698   22004 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:35:44.550409   22004 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-darwin-amd64 stop -p newest-cni-767000 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Stop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "04f3c15d5a7eb04a64ac81be857d441f55c4e3b3ea043022c88c663eee1a3785",
	        "Created": "2023-02-23T21:35:22.151697816Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (100.634766ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:44.753833   22033 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (13.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (101.289928ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:44.855250   22037 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Nonexistent"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-767000 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "04f3c15d5a7eb04a64ac81be857d441f55c4e3b3ea043022c88c663eee1a3785",
	        "Created": "2023-02-23T21:35:22.151697816Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (103.23018ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:35:45.285407   22049 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (58.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-767000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1
E0223 13:36:09.936750    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 13:36:29.664038    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p newest-cni-767000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1: exit status 80 (58.746538793s)

                                                
                                                
-- stdout --
	* [newest-cni-767000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting control plane node newest-cni-767000 in cluster newest-cni-767000
	* Pulling base image ...
	* docker "newest-cni-767000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* docker "newest-cni-767000" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:35:45.330596   22053 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:35:45.330771   22053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:35:45.330776   22053 out.go:309] Setting ErrFile to fd 2...
	I0223 13:35:45.330780   22053 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:35:45.330891   22053 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:35:45.332209   22053 out.go:303] Setting JSON to false
	I0223 13:35:45.350785   22053 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3920,"bootTime":1677184225,"procs":400,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 13:35:45.350875   22053 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 13:35:45.373310   22053 out.go:177] * [newest-cni-767000] minikube v1.29.0 on Darwin 13.2
	I0223 13:35:45.415831   22053 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 13:35:45.415826   22053 notify.go:220] Checking for updates...
	I0223 13:35:45.437061   22053 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 13:35:45.458158   22053 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 13:35:45.479816   22053 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 13:35:45.501147   22053 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 13:35:45.543642   22053 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 13:35:45.565503   22053 config.go:182] Loaded profile config "newest-cni-767000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:35:45.566135   22053 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 13:35:45.627255   22053 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 13:35:45.627378   22053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:35:45.769721   22053 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:35:45.678231882 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:35:45.813122   22053 out.go:177] * Using the docker driver based on existing profile
	I0223 13:35:45.834307   22053 start.go:296] selected driver: docker
	I0223 13:35:45.834333   22053 start.go:857] validating driver "docker" against &{Name:newest-cni-767000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-767000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:35:45.834438   22053 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 13:35:45.838269   22053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 13:35:45.981595   22053 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 21:35:45.888706467 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:N/A Expected:N/A} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo
:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 Shadow
edPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 13:35:45.981761   22053 start_flags.go:938] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0223 13:35:45.981777   22053 cni.go:84] Creating CNI manager for ""
	I0223 13:35:45.981790   22053 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 13:35:45.981798   22053 start_flags.go:319] config:
	{Name:newest-cni-767000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:newest-cni-767000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true metrics-server:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mo
untGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 13:35:46.003613   22053 out.go:177] * Starting control plane node newest-cni-767000 in cluster newest-cni-767000
	I0223 13:35:46.025286   22053 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 13:35:46.047160   22053 out.go:177] * Pulling base image ...
	I0223 13:35:46.089390   22053 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 13:35:46.089447   22053 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 13:35:46.089481   22053 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 13:35:46.089499   22053 cache.go:57] Caching tarball of preloaded images
	I0223 13:35:46.089726   22053 preload.go:174] Found /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0223 13:35:46.089745   22053 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on docker
	I0223 13:35:46.090690   22053 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/newest-cni-767000/config.json ...
	I0223 13:35:46.146075   22053 image.go:81] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon, skipping pull
	I0223 13:35:46.146090   22053 cache.go:143] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in daemon, skipping load
	I0223 13:35:46.146117   22053 cache.go:193] Successfully downloaded all kic artifacts
	I0223 13:35:46.146152   22053 start.go:364] acquiring machines lock for newest-cni-767000: {Name:mka7b360626537fa2584605db7207cfd3caf5aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:35:46.146258   22053 start.go:368] acquired machines lock for "newest-cni-767000" in 86.378µs
	I0223 13:35:46.146283   22053 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:35:46.146291   22053 fix.go:55] fixHost starting: 
	I0223 13:35:46.146524   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:46.201500   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:46.201561   22053 fix.go:103] recreateIfNeeded on newest-cni-767000: state= err=unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:46.201582   22053 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:35:46.223422   22053 out.go:177] * docker "newest-cni-767000" container is missing, will recreate.
	I0223 13:35:46.245102   22053 delete.go:124] DEMOLISHING newest-cni-767000 ...
	I0223 13:35:46.245315   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:46.300918   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:46.300958   22053 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:46.300970   22053 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:46.301335   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:46.356346   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:46.356404   22053 delete.go:82] Unable to get host status for newest-cni-767000, assuming it has already been deleted: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:46.356477   22053 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-767000
	W0223 13:35:46.411036   22053 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-767000 returned with exit code 1
	I0223 13:35:46.411065   22053 kic.go:367] could not find the container newest-cni-767000 to remove it. will try anyways
	I0223 13:35:46.411142   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:46.464452   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:35:46.464498   22053 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:46.464567   22053 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0"
	W0223 13:35:46.518552   22053 cli_runner.go:211] docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:35:46.518582   22053 oci.go:641] error shutdown newest-cni-767000: docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:47.520941   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:47.582173   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:47.582216   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:47.582225   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:47.582281   22053 retry.go:31] will retry after 731.988739ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:48.315037   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:48.374988   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:48.375030   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:48.375039   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:48.375059   22053 retry.go:31] will retry after 673.692088ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:49.051126   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:49.107917   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:49.107958   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:49.107968   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:49.107992   22053 retry.go:31] will retry after 1.552698927s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:50.662171   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:50.718694   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:50.718736   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:50.718743   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:50.718769   22053 retry.go:31] will retry after 1.777384313s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:52.498447   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:52.558327   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:52.558370   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:52.558378   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:52.558397   22053 retry.go:31] will retry after 1.772932996s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:54.331982   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:54.391323   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:54.391366   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:54.391374   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:54.391393   22053 retry.go:31] will retry after 3.256221386s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:57.649160   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:35:57.710209   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:35:57.710257   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:35:57.710265   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:35:57.710282   22053 retry.go:31] will retry after 3.35986152s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:01.070283   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:01.127761   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:01.127796   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:01.127804   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:01.127826   22053 oci.go:88] couldn't shut down newest-cni-767000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	 
	I0223 13:36:01.127903   22053 cli_runner.go:164] Run: docker rm -f -v newest-cni-767000
	I0223 13:36:01.184174   22053 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-767000
	W0223 13:36:01.239328   22053 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-767000 returned with exit code 1
	I0223 13:36:01.239455   22053 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:36:01.294644   22053 cli_runner.go:164] Run: docker network rm newest-cni-767000
	W0223 13:36:01.396741   22053 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:36:01.396759   22053 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:36:02.398895   22053 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:36:02.420981   22053 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:36:02.421147   22053 start.go:159] libmachine.API.Create for "newest-cni-767000" (driver="docker")
	I0223 13:36:02.421221   22053 client.go:168] LocalClient.Create starting
	I0223 13:36:02.421438   22053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:36:02.421525   22053 main.go:141] libmachine: Decoding PEM data...
	I0223 13:36:02.421557   22053 main.go:141] libmachine: Parsing certificate...
	I0223 13:36:02.421675   22053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:36:02.421753   22053 main.go:141] libmachine: Decoding PEM data...
	I0223 13:36:02.421772   22053 main.go:141] libmachine: Parsing certificate...
	I0223 13:36:02.422532   22053 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:36:02.480513   22053 cli_runner.go:211] docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:36:02.480603   22053 network_create.go:281] running [docker network inspect newest-cni-767000] to gather additional debugging logs...
	I0223 13:36:02.480621   22053 cli_runner.go:164] Run: docker network inspect newest-cni-767000
	W0223 13:36:02.534649   22053 cli_runner.go:211] docker network inspect newest-cni-767000 returned with exit code 1
	I0223 13:36:02.534676   22053 network_create.go:284] error running [docker network inspect newest-cni-767000]: docker network inspect newest-cni-767000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-767000
	I0223 13:36:02.534693   22053 network_create.go:286] output of [docker network inspect newest-cni-767000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-767000
	
	** /stderr **
	I0223 13:36:02.534769   22053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:36:02.591129   22053 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:36:02.591443   22053 network.go:209] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0013b9870}
	I0223 13:36:02.591458   22053 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0223 13:36:02.591523   22053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	W0223 13:36:02.646468   22053 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000 returned with exit code 1
	W0223 13:36:02.646499   22053 network_create.go:148] failed to create docker network newest-cni-767000 192.168.58.0/24 with gateway 192.168.58.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:36:02.646517   22053 network_create.go:115] failed to create docker network newest-cni-767000 192.168.58.0/24, will retry: subnet is taken
	I0223 13:36:02.648074   22053 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:36:02.648384   22053 network.go:209] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f918f0}
	I0223 13:36:02.648394   22053 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0223 13:36:02.648467   22053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	I0223 13:36:02.737066   22053 network_create.go:107] docker network newest-cni-767000 192.168.67.0/24 created
	I0223 13:36:02.737112   22053 kic.go:117] calculated static IP "192.168.67.2" for the "newest-cni-767000" container
	I0223 13:36:02.737237   22053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:36:02.793347   22053 cli_runner.go:164] Run: docker volume create newest-cni-767000 --label name.minikube.sigs.k8s.io=newest-cni-767000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:36:02.847205   22053 oci.go:103] Successfully created a docker volume newest-cni-767000
	I0223 13:36:02.847325   22053 cli_runner.go:164] Run: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:36:02.989806   22053 cli_runner.go:211] docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:36:02.989851   22053 client.go:171] LocalClient.Create took 568.619611ms
	I0223 13:36:04.992218   22053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:36:04.992340   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:05.052340   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:05.052428   22053 retry.go:31] will retry after 255.096812ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:05.308802   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:05.369972   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:05.370065   22053 retry.go:31] will retry after 413.852935ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:05.786327   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:05.843267   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:05.843353   22053 retry.go:31] will retry after 406.520877ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:06.252286   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:06.310511   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:06.310598   22053 retry.go:31] will retry after 631.446854ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:06.944375   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:07.002123   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:07.002225   22053 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:07.002239   22053 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:07.002300   22053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:36:07.002348   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:07.056410   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:07.056499   22053 retry.go:31] will retry after 154.449058ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:07.213308   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:07.272387   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:07.272481   22053 retry.go:31] will retry after 527.087155ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:07.801893   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:07.862119   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:07.862220   22053 retry.go:31] will retry after 537.951085ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:08.401552   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:08.463225   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:08.463320   22053 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:08.463334   22053 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:08.463349   22053 start.go:128] duration metric: createHost completed in 6.064350227s
	I0223 13:36:08.463428   22053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:36:08.463479   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:08.519190   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:08.519271   22053 retry.go:31] will retry after 235.377192ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:08.756178   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:08.814212   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:08.814295   22053 retry.go:31] will retry after 399.594708ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:09.214384   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:09.273739   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:09.273823   22053 retry.go:31] will retry after 381.879474ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:09.658021   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:09.715482   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:09.715564   22053 retry.go:31] will retry after 536.208187ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:10.254193   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:10.314486   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:10.314574   22053 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:10.314591   22053 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:10.314649   22053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:36:10.314705   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:10.368455   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:10.368540   22053 retry.go:31] will retry after 269.306744ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:10.638606   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:10.698532   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:10.698612   22053 retry.go:31] will retry after 425.812483ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:11.124758   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:11.183815   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:11.183899   22053 retry.go:31] will retry after 385.213197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:11.571459   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:11.628826   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:11.628914   22053 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:11.628929   22053 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:11.628933   22053 fix.go:57] fixHost completed within 25.482597124s
	I0223 13:36:11.628940   22053 start.go:83] releasing machines lock for "newest-cni-767000", held for 25.482629344s
	W0223 13:36:11.628955   22053 start.go:691] error starting host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	W0223 13:36:11.629094   22053 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	! StartHost failed, but will try again: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:36:11.629102   22053 start.go:706] Will try again in 5 seconds ...
	I0223 13:36:16.629684   22053 start.go:364] acquiring machines lock for newest-cni-767000: {Name:mka7b360626537fa2584605db7207cfd3caf5aca Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 13:36:16.629908   22053 start.go:368] acquired machines lock for "newest-cni-767000" in 182.624µs
	I0223 13:36:16.629969   22053 start.go:96] Skipping create...Using existing machine configuration
	I0223 13:36:16.629979   22053 fix.go:55] fixHost starting: 
	I0223 13:36:16.630399   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:16.692315   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:16.692361   22053 fix.go:103] recreateIfNeeded on newest-cni-767000: state= err=unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:16.692371   22053 fix.go:108] machineExists: false. err=machine does not exist
	I0223 13:36:16.714283   22053 out.go:177] * docker "newest-cni-767000" container is missing, will recreate.
	I0223 13:36:16.735876   22053 delete.go:124] DEMOLISHING newest-cni-767000 ...
	I0223 13:36:16.736056   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:16.792147   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:36:16.792197   22053 stop.go:75] unable to get state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:16.792212   22053 delete.go:129] stophost failed (probably ok): ssh power off: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:16.792581   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:16.845682   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:16.845734   22053 delete.go:82] Unable to get host status for newest-cni-767000, assuming it has already been deleted: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:16.845811   22053 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-767000
	W0223 13:36:16.898800   22053 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-767000 returned with exit code 1
	I0223 13:36:16.898828   22053 kic.go:367] could not find the container newest-cni-767000 to remove it. will try anyways
	I0223 13:36:16.898911   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:16.955201   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	W0223 13:36:16.955253   22053 oci.go:84] error getting container status, will try to delete anyways: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:16.955342   22053 cli_runner.go:164] Run: docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0"
	W0223 13:36:17.009773   22053 cli_runner.go:211] docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0" returned with exit code 1
	I0223 13:36:17.009812   22053 oci.go:641] error shutdown newest-cni-767000: docker exec --privileged -t newest-cni-767000 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:18.010671   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:18.066062   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:18.066106   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:18.066115   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:18.066134   22053 retry.go:31] will retry after 677.658366ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:18.745515   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:18.803678   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:18.803722   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:18.803730   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:18.803750   22053 retry.go:31] will retry after 586.79415ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:19.391527   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:19.449644   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:19.449690   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:19.449698   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:19.449719   22053 retry.go:31] will retry after 904.39548ms: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:20.355094   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:20.413181   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:20.413225   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:20.413233   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:20.413264   22053 retry.go:31] will retry after 1.263936007s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:21.678550   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:21.735843   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:21.735891   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:21.735900   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:21.735920   22053 retry.go:31] will retry after 1.484710532s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:23.221588   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:23.281607   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:23.281658   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:23.281666   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:23.281697   22053 retry.go:31] will retry after 5.57592019s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:28.859407   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:28.916573   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:28.916616   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:28.916626   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:28.916647   22053 retry.go:31] will retry after 5.206820583s: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:34.125900   22053 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:34.186280   22053 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:34.186324   22053 oci.go:653] temporary error verifying shutdown: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:34.186332   22053 oci.go:655] temporary error: container newest-cni-767000 status is  but expect it to be exited
	I0223 13:36:34.186357   22053 oci.go:88] couldn't shut down newest-cni-767000 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	 
	I0223 13:36:34.186435   22053 cli_runner.go:164] Run: docker rm -f -v newest-cni-767000
	I0223 13:36:34.243180   22053 cli_runner.go:164] Run: docker container inspect -f {{.Id}} newest-cni-767000
	W0223 13:36:34.297834   22053 cli_runner.go:211] docker container inspect -f {{.Id}} newest-cni-767000 returned with exit code 1
	I0223 13:36:34.297945   22053 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:36:34.352615   22053 cli_runner.go:164] Run: docker network rm newest-cni-767000
	W0223 13:36:34.455865   22053 delete.go:139] delete failed (probably ok) <nil>
	I0223 13:36:34.455884   22053 fix.go:115] Sleeping 1 second for extra luck!
	I0223 13:36:35.456913   22053 start.go:125] createHost starting for "" (driver="docker")
	I0223 13:36:35.499771   22053 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0223 13:36:35.500030   22053 start.go:159] libmachine.API.Create for "newest-cni-767000" (driver="docker")
	I0223 13:36:35.500058   22053 client.go:168] LocalClient.Create starting
	I0223 13:36:35.500267   22053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/ca.pem
	I0223 13:36:35.500348   22053 main.go:141] libmachine: Decoding PEM data...
	I0223 13:36:35.500386   22053 main.go:141] libmachine: Parsing certificate...
	I0223 13:36:35.500489   22053 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15909-825/.minikube/certs/cert.pem
	I0223 13:36:35.500553   22053 main.go:141] libmachine: Decoding PEM data...
	I0223 13:36:35.500575   22053 main.go:141] libmachine: Parsing certificate...
	I0223 13:36:35.501311   22053 cli_runner.go:164] Run: docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0223 13:36:35.557822   22053 cli_runner.go:211] docker network inspect newest-cni-767000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0223 13:36:35.557906   22053 network_create.go:281] running [docker network inspect newest-cni-767000] to gather additional debugging logs...
	I0223 13:36:35.557922   22053 cli_runner.go:164] Run: docker network inspect newest-cni-767000
	W0223 13:36:35.612146   22053 cli_runner.go:211] docker network inspect newest-cni-767000 returned with exit code 1
	I0223 13:36:35.612171   22053 network_create.go:284] error running [docker network inspect newest-cni-767000]: docker network inspect newest-cni-767000: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: newest-cni-767000
	I0223 13:36:35.612182   22053 network_create.go:286] output of [docker network inspect newest-cni-767000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: newest-cni-767000
	
	** /stderr **
	I0223 13:36:35.612261   22053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0223 13:36:35.668932   22053 network.go:212] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:36:35.670378   22053 network.go:212] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:36:35.671880   22053 network.go:212] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:36:35.672172   22053 network.go:209] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00100dcc0}
	I0223 13:36:35.672187   22053 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0223 13:36:35.672256   22053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	W0223 13:36:35.726871   22053 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000 returned with exit code 1
	W0223 13:36:35.726903   22053 network_create.go:148] failed to create docker network newest-cni-767000 192.168.76.0/24 with gateway 192.168.76.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Pool overlaps with other one on this address space
	W0223 13:36:35.726917   22053 network_create.go:115] failed to create docker network newest-cni-767000 192.168.76.0/24, will retry: subnet is taken
	I0223 13:36:35.728435   22053 network.go:212] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I0223 13:36:35.728758   22053 network.go:209] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000d53260}
	I0223 13:36:35.728770   22053 network_create.go:123] attempt to create docker network newest-cni-767000 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0223 13:36:35.728833   22053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-767000 newest-cni-767000
	I0223 13:36:35.814577   22053 network_create.go:107] docker network newest-cni-767000 192.168.85.0/24 created
	I0223 13:36:35.814607   22053 kic.go:117] calculated static IP "192.168.85.2" for the "newest-cni-767000" container
	I0223 13:36:35.814725   22053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0223 13:36:35.873453   22053 cli_runner.go:164] Run: docker volume create newest-cni-767000 --label name.minikube.sigs.k8s.io=newest-cni-767000 --label created_by.minikube.sigs.k8s.io=true
	I0223 13:36:35.927330   22053 oci.go:103] Successfully created a docker volume newest-cni-767000
	I0223 13:36:35.927469   22053 cli_runner.go:164] Run: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib
	W0223 13:36:36.061982   22053 cli_runner.go:211] docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib returned with exit code 125
	I0223 13:36:36.062025   22053 client.go:171] LocalClient.Create took 561.960402ms
	I0223 13:36:38.062946   22053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:36:38.063156   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:38.122157   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:38.122247   22053 retry.go:31] will retry after 153.943618ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:38.278547   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:38.334374   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:38.334459   22053 retry.go:31] will retry after 447.842726ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:38.783735   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:38.840005   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:38.840091   22053 retry.go:31] will retry after 498.87361ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:39.341366   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:39.402339   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:39.402424   22053 retry.go:31] will retry after 539.418691ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:39.943270   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:40.000022   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:40.000117   22053 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:40.000134   22053 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:40.000192   22053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:36:40.000239   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:40.054098   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:40.054186   22053 retry.go:31] will retry after 361.273067ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:40.417899   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:40.477104   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:40.477196   22053 retry.go:31] will retry after 217.404603ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:40.697002   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:40.756691   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:40.756798   22053 retry.go:31] will retry after 641.509252ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:41.398989   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:41.454149   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:41.454241   22053 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:41.454259   22053 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:41.454263   22053 start.go:128] duration metric: createHost completed in 5.997318435s
	I0223 13:36:41.454334   22053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 13:36:41.454384   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:41.508410   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:41.508499   22053 retry.go:31] will retry after 303.317349ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:41.812912   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:41.872492   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:41.872576   22053 retry.go:31] will retry after 561.808566ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:42.436219   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:42.495326   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:42.495410   22053 retry.go:31] will retry after 301.637366ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:42.798840   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:42.857413   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:42.857506   22053 start.go:275] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:42.857520   22053 start.go:242] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:42.857578   22053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0223 13:36:42.857643   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:42.914502   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:42.914583   22053 retry.go:31] will retry after 154.889201ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:43.071431   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:43.128115   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:43.128199   22053 retry.go:31] will retry after 319.698831ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:43.448071   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:43.507952   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	I0223 13:36:43.508036   22053 retry.go:31] will retry after 309.282338ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:43.818402   22053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000
	W0223 13:36:43.879108   22053 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000 returned with exit code 1
	W0223 13:36:43.879213   22053 start.go:290] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:43.879233   22053 start.go:247] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "newest-cni-767000": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-767000: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	I0223 13:36:43.879237   22053 fix.go:57] fixHost completed within 27.249210485s
	I0223 13:36:43.879244   22053 start.go:83] releasing machines lock for "newest-cni-767000", held for 27.249274345s
	W0223 13:36:43.879398   22053 out.go:239] * Failed to start docker container. Running "minikube delete -p newest-cni-767000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	* Failed to start docker container. Running "minikube delete -p newest-cni-767000" may fix it: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	I0223 13:36:43.922965   22053 out.go:177] 
	W0223 13:36:43.945160   22053 out.go:239] X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	X Exiting due to GUEST_PROVISION: Failed to start host: recreate: creating host: create: creating: setting up container node: preparing volume for newest-cni-767000 container: docker run --rm --name newest-cni-767000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-767000 --entrypoint /usr/bin/test -v newest-cni-767000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc -d /var/lib: exit status 125
	stdout:
	
	stderr:
	docker: Error response from daemon: connection error: desc = "transport: Error while dialing dial unix /var/run/desktop-containerd/containerd.sock: connect: connection refused": unavailable.
	
	W0223 13:36:43.945190   22053 out.go:239] * 
	* 
	W0223 13:36:43.946576   22053 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:36:44.009091   22053 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p newest-cni-767000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --kubernetes-version=v1.26.1": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "c1bc5b4fcc695c441eceb8554347c76e43d638fcbb9c175e83aa658352534793",
	        "Created": "2023-02-23T21:36:35.778350569Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (99.956243ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:36:44.208822   22319 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (58.92s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-767000 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p newest-cni-767000 "sudo crictl images -o json": exit status 80 (192.990338ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_bc6d6f4ab23dc964da06b9c7910ecd825d31f73e_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-darwin-amd64 ssh -p newest-cni-767000 \"sudo crictl images -o json\"": exit status 80
start_stop_delete_test.go:304: failed to decode images json unexpected end of JSON input. output:

                                                
                                                

                                                
                                                
start_stop_delete_test.go:304: v1.26.1 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.9.3",
- 	"registry.k8s.io/etcd:3.5.6-0",
- 	"registry.k8s.io/kube-apiserver:v1.26.1",
- 	"registry.k8s.io/kube-controller-manager:v1.26.1",
- 	"registry.k8s.io/kube-proxy:v1.26.1",
- 	"registry.k8s.io/kube-scheduler:v1.26.1",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/VerifyKubernetesImages]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "c1bc5b4fcc695c441eceb8554347c76e43d638fcbb9c175e83aa658352534793",
	        "Created": "2023-02-23T21:36:35.778350569Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (101.623824ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:36:44.562863   22329 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (0.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-767000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 pause -p newest-cni-767000 --alsologtostderr -v=1: exit status 80 (191.609947ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:36:44.607603   22333 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:36:44.607782   22333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:36:44.607786   22333 out.go:309] Setting ErrFile to fd 2...
	I0223 13:36:44.607790   22333 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:36:44.607898   22333 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:36:44.608221   22333 out.go:303] Setting JSON to false
	I0223 13:36:44.608237   22333 mustload.go:65] Loading cluster: newest-cni-767000
	I0223 13:36:44.608511   22333 config.go:182] Loaded profile config "newest-cni-767000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:36:44.608892   22333 cli_runner.go:164] Run: docker container inspect newest-cni-767000 --format={{.State.Status}}
	W0223 13:36:44.662951   22333 cli_runner.go:211] docker container inspect newest-cni-767000 --format={{.State.Status}} returned with exit code 1
	I0223 13:36:44.685223   22333 out.go:177] 
	W0223 13:36:44.707092   22333 out.go:239] X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	X Exiting due to GUEST_STATUS: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000
	
	W0223 13:36:44.707125   22333 out.go:239] * 
	* 
	W0223 13:36:44.711702   22333 out.go:239] ╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                          │
	│    * If the above advice does not help, please let us know:                                                              │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                            │
	│                                                                                                                          │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                 │
	│    * Please also attach the following file to the GitHub issue:                                                          │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│                                                                                                                          │
	╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I0223 13:36:44.732686   22333 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:311: out/minikube-darwin-amd64 pause -p newest-cni-767000 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "c1bc5b4fcc695c441eceb8554347c76e43d638fcbb9c175e83aa658352534793",
	        "Created": "2023-02-23T21:36:35.778350569Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (100.993413ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:36:44.914487   22339 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect newest-cni-767000
helpers_test.go:235: (dbg) docker inspect newest-cni-767000:

                                                
                                                
-- stdout --
	[
	    {
	        "Name": "newest-cni-767000",
	        "Id": "c1bc5b4fcc695c441eceb8554347c76e43d638fcbb9c175e83aa658352534793",
	        "Created": "2023-02-23T21:36:35.778350569Z",
	        "Scope": "local",
	        "Driver": "bridge",
	        "EnableIPv6": false,
	        "IPAM": {
	            "Driver": "default",
	            "Options": {},
	            "Config": [
	                {
	                    "Subnet": "192.168.85.0/24",
	                    "Gateway": "192.168.85.1"
	                }
	            ]
	        },
	        "Internal": false,
	        "Attachable": false,
	        "Ingress": false,
	        "ConfigFrom": {
	            "Network": ""
	        },
	        "ConfigOnly": false,
	        "Containers": {},
	        "Options": {
	            "--icc": "",
	            "--ip-masq": "",
	            "com.docker.network.driver.mtu": "1500"
	        },
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "newest-cni-767000"
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-767000 -n newest-cni-767000: exit status 7 (101.157957ms)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:36:45.075960   22347 status.go:249] status error: host: state: unknown state "newest-cni-767000": docker container inspect newest-cni-767000 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: newest-cni-767000

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "newest-cni-767000" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (0.51s)


Test pass (163/253)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 24.96
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.31
10 TestDownloadOnly/v1.26.1/json-events 18.77
11 TestDownloadOnly/v1.26.1/preload-exists 0
14 TestDownloadOnly/v1.26.1/kubectl 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.44
16 TestDownloadOnly/DeleteAll 0.66
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.38
18 TestDownloadOnlyKic 1.99
19 TestBinaryMirror 1.65
20 TestOffline 55.63
22 TestAddons/Setup 148.29
26 TestAddons/parallel/MetricsServer 5.58
27 TestAddons/parallel/HelmTiller 11.21
29 TestAddons/parallel/CSI 70.28
30 TestAddons/parallel/Headlamp 10.51
31 TestAddons/parallel/CloudSpanner 5.47
34 TestAddons/serial/GCPAuth/Namespaces 0.15
35 TestAddons/StoppedEnableDisable 11.42
36 TestCertOptions 43.18
37 TestCertExpiration 359.59
38 TestDockerFlags 37.96
39 TestForceSystemdFlag 32.47
40 TestForceSystemdEnv 34.78
42 TestHyperKitDriverInstallOrUpdate 7.04
45 TestErrorSpam/setup 31.95
46 TestErrorSpam/start 2.45
47 TestErrorSpam/status 1.22
48 TestErrorSpam/pause 1.78
49 TestErrorSpam/unpause 1.88
50 TestErrorSpam/stop 11.51
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 49.08
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 44.59
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.07
61 TestFunctional/serial/CacheCmd/cache/add_remote 8.17
62 TestFunctional/serial/CacheCmd/cache/add_local 1.66
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.07
64 TestFunctional/serial/CacheCmd/cache/list 0.07
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.41
66 TestFunctional/serial/CacheCmd/cache/cache_reload 2.82
67 TestFunctional/serial/CacheCmd/cache/delete 0.14
68 TestFunctional/serial/MinikubeKubectlCmd 0.55
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.81
70 TestFunctional/serial/ExtraConfig 42.77
71 TestFunctional/serial/ComponentHealth 0.05
72 TestFunctional/serial/LogsCmd 2.89
73 TestFunctional/serial/LogsFileCmd 3.08
75 TestFunctional/parallel/ConfigCmd 0.43
76 TestFunctional/parallel/DashboardCmd 8.84
77 TestFunctional/parallel/DryRun 1.72
78 TestFunctional/parallel/InternationalLanguage 0.65
79 TestFunctional/parallel/StatusCmd 1.22
84 TestFunctional/parallel/AddonsCmd 0.24
85 TestFunctional/parallel/PersistentVolumeClaim 32.01
87 TestFunctional/parallel/SSHCmd 0.78
88 TestFunctional/parallel/CpCmd 2.13
89 TestFunctional/parallel/MySQL 27.4
90 TestFunctional/parallel/FileSync 0.44
91 TestFunctional/parallel/CertSync 2.77
95 TestFunctional/parallel/NodeLabels 0.1
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
99 TestFunctional/parallel/License 0.82
100 TestFunctional/parallel/Version/short 0.14
101 TestFunctional/parallel/Version/components 1.02
102 TestFunctional/parallel/ImageCommands/ImageListShort 0.36
103 TestFunctional/parallel/ImageCommands/ImageListTable 0.38
104 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
105 TestFunctional/parallel/ImageCommands/ImageListYaml 0.41
106 TestFunctional/parallel/ImageCommands/ImageBuild 4.23
107 TestFunctional/parallel/ImageCommands/Setup 2.89
108 TestFunctional/parallel/DockerEnv/bash 1.85
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.29
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.43
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.78
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.69
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.88
115 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.08
116 TestFunctional/parallel/ImageCommands/ImageRemove 0.72
117 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.39
118 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.67
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.14
123 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.71
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.51
131 TestFunctional/parallel/ProfileCmd/profile_list 0.48
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.48
133 TestFunctional/parallel/MountCmd/any-port 12.65
134 TestFunctional/parallel/MountCmd/specific-port 2.49
135 TestFunctional/delete_addon-resizer_images 0.15
136 TestFunctional/delete_my-image_image 0.06
137 TestFunctional/delete_minikube_cached_images 0.06
141 TestImageBuild/serial/NormalBuild 2.4
142 TestImageBuild/serial/BuildWithBuildArg 0.98
143 TestImageBuild/serial/BuildWithDockerIgnore 0.47
144 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.41
154 TestJSONOutput/start/Command 46.94
155 TestJSONOutput/start/Audit 0
157 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
158 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
160 TestJSONOutput/pause/Command 0.64
161 TestJSONOutput/pause/Audit 0
163 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
164 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
166 TestJSONOutput/unpause/Command 0.61
167 TestJSONOutput/unpause/Audit 0
169 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
170 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
172 TestJSONOutput/stop/Command 10.86
173 TestJSONOutput/stop/Audit 0
175 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
177 TestErrorJSONOutput 0.76
179 TestKicCustomNetwork/create_custom_network 35.34
180 TestKicCustomNetwork/use_default_bridge_network 29.82
181 TestKicExistingNetwork 29.81
182 TestKicCustomSubnet 30.8
183 TestKicStaticIP 31.19
184 TestMainNoArgs 0.07
185 TestMinikubeProfile 67.9
188 TestMountStart/serial/StartWithMountFirst 8.17
189 TestMountStart/serial/VerifyMountFirst 0.39
190 TestMountStart/serial/StartWithMountSecond 8.1
191 TestMountStart/serial/VerifyMountSecond 0.4
192 TestMountStart/serial/DeleteFirst 2.12
193 TestMountStart/serial/VerifyMountPostDelete 0.39
194 TestMountStart/serial/Stop 1.6
195 TestMountStart/serial/RestartStopped 5.95
196 TestMountStart/serial/VerifyMountPostStop 0.4
199 TestMultiNode/serial/FreshStart2Nodes 90.64
202 TestMultiNode/serial/AddNode 21.62
203 TestMultiNode/serial/ProfileList 0.45
204 TestMultiNode/serial/CopyFile 14.49
205 TestMultiNode/serial/StopNode 2.99
206 TestMultiNode/serial/StartAfterStop 10.01
207 TestMultiNode/serial/RestartKeepsNodes 86.72
208 TestMultiNode/serial/DeleteNode 6.15
209 TestMultiNode/serial/StopMultiNode 21.88
210 TestMultiNode/serial/RestartMultiNode 55.11
211 TestMultiNode/serial/ValidateNameConflict 32.8
215 TestPreload 135.26
217 TestScheduledStopUnix 101.75
218 TestSkaffold 67.32
220 TestInsufficientStorage 14.86
236 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 18.52
237 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 27.1
238 TestStoppedBinaryUpgrade/Setup 4.25
250 TestNoKubernetes/serial/StartNoK8sWithVersion 0.41
254 TestNoKubernetes/serial/VerifyK8sNotRunning 0.19
255 TestNoKubernetes/serial/ProfileList 10.73
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
315 TestStartStop/group/newest-cni/serial/DeployApp 0
316 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.22
320 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
321 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
TestDownloadOnly/v1.16.0/json-events (24.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (24.960610583s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (24.96s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-653000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-653000: exit status 85 (309.490074ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-653000 | jenkins | v1.29.0 | 23 Feb 23 12:32 PST |          |
	|         | -p download-only-653000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 12:32:52
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 12:32:52.021352    2059 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:32:52.021528    2059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:32:52.021533    2059 out.go:309] Setting ErrFile to fd 2...
	I0223 12:32:52.021537    2059 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:32:52.021650    2059 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	W0223 12:32:52.021754    2059 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-825/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-825/.minikube/config/config.json: no such file or directory
	I0223 12:32:52.023333    2059 out.go:303] Setting JSON to true
	I0223 12:32:52.042033    2059 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":147,"bootTime":1677184225,"procs":379,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:32:52.042131    2059 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:32:52.063564    2059 out.go:97] [download-only-653000] minikube v1.29.0 on Darwin 13.2
	I0223 12:32:52.085241    2059 out.go:169] MINIKUBE_LOCATION=15909
	W0223 12:32:52.063848    2059 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 12:32:52.063871    2059 notify.go:220] Checking for updates...
	I0223 12:32:52.128206    2059 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:32:52.149456    2059 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:32:52.170433    2059 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:32:52.191381    2059 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	W0223 12:32:52.233288    2059 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 12:32:52.233586    2059 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 12:32:52.293475    2059 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:32:52.293589    2059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:32:52.436983    2059 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 20:32:52.343822845 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:32:52.458195    2059 out.go:97] Using the docker driver based on user configuration
	I0223 12:32:52.458306    2059 start.go:296] selected driver: docker
	I0223 12:32:52.458319    2059 start.go:857] validating driver "docker" against <nil>
	I0223 12:32:52.458517    2059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:32:52.599924    2059 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 20:32:52.508136042 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:32:52.600018    2059 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 12:32:52.604372    2059 start_flags.go:386] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I0223 12:32:52.604549    2059 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 12:32:52.625555    2059 out.go:169] Using Docker Desktop driver with root privileges
	I0223 12:32:52.646387    2059 cni.go:84] Creating CNI manager for ""
	I0223 12:32:52.646427    2059 cni.go:161] CNI unnecessary in this configuration, recommending no CNI
	I0223 12:32:52.646441    2059 start_flags.go:319] config:
	{Name:download-only-653000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-653000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:32:52.668325    2059 out.go:97] Starting control plane node download-only-653000 in cluster download-only-653000
	I0223 12:32:52.668436    2059 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:32:52.689340    2059 out.go:97] Pulling base image ...
	I0223 12:32:52.689452    2059 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 12:32:52.689570    2059 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:32:52.744177    2059 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 12:32:52.744499    2059 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 12:32:52.744625    2059 image.go:119] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 12:32:52.787448    2059 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 12:32:52.787482    2059 cache.go:57] Caching tarball of preloaded images
	I0223 12:32:52.787819    2059 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 12:32:52.809459    2059 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0223 12:32:52.809527    2059 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:32:53.023664    2059 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I0223 12:33:10.824047    2059 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:33:10.824190    2059 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:33:11.367765    2059 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I0223 12:33:11.367955    2059 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/download-only-653000/config.json ...
	I0223 12:33:11.367980    2059 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/download-only-653000/config.json: {Name:mk4f33063298bd27397fce42e9771fba0078ae73 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 12:33:11.368249    2059 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I0223 12:33:11.368507    2059 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-653000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.31s)
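
Note: the non-zero exit recorded above is informational rather than a failure; the test still reports PASS. A --download-only profile only caches artifacts (the kic base image, the preload tarball, and kubectl in this run) and never creates a node, so "minikube logs" has nothing to read and exits 85 with the message "The control plane node "" does not exist." shown in the output. A minimal local reproduction, assuming the same built darwin/amd64 binary under out/, would be:

	out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker
	out/minikube-darwin-amd64 logs -p download-only-653000
	echo $?    # expected to print 85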

TestDownloadOnly/v1.26.1/json-events (18.77s)

=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-653000 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=docker --driver=docker : (18.765375587s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (18.77s)

TestDownloadOnly/v1.26.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
--- PASS: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.44s)

=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-653000
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-653000: exit status 85 (440.000987ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-653000 | jenkins | v1.29.0 | 23 Feb 23 12:32 PST |          |
	|         | -p download-only-653000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-653000 | jenkins | v1.29.0 | 23 Feb 23 12:33 PST |          |
	|         | -p download-only-653000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 12:33:17
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.20.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 12:33:17.296261    2113 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:33:17.296424    2113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:33:17.296429    2113 out.go:309] Setting ErrFile to fd 2...
	I0223 12:33:17.296433    2113 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:33:17.296541    2113 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	W0223 12:33:17.296643    2113 root.go:312] Error reading config file at /Users/jenkins/minikube-integration/15909-825/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15909-825/.minikube/config/config.json: no such file or directory
	I0223 12:33:17.297828    2113 out.go:303] Setting JSON to true
	I0223 12:33:17.316274    2113 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":172,"bootTime":1677184225,"procs":377,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:33:17.316360    2113 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:33:17.337813    2113 out.go:97] [download-only-653000] minikube v1.29.0 on Darwin 13.2
	I0223 12:33:17.337965    2113 notify.go:220] Checking for updates...
	I0223 12:33:17.359069    2113 out.go:169] MINIKUBE_LOCATION=15909
	I0223 12:33:17.381410    2113 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:33:17.408003    2113 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:33:17.430134    2113 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:33:17.451820    2113 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	W0223 12:33:17.493714    2113 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 12:33:17.494400    2113 config.go:182] Loaded profile config "download-only-653000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W0223 12:33:17.494478    2113 start.go:765] api.Load failed for download-only-653000: filestore "download-only-653000": Docker machine "download-only-653000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 12:33:17.494550    2113 driver.go:365] Setting default libvirt URI to qemu:///system
	W0223 12:33:17.494588    2113 start.go:765] api.Load failed for download-only-653000: filestore "download-only-653000": Docker machine "download-only-653000" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 12:33:17.554904    2113 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:33:17.554997    2113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:33:17.696808    2113 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 20:33:17.605116899 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:33:17.718497    2113 out.go:97] Using the docker driver based on existing profile
	I0223 12:33:17.718589    2113 start.go:296] selected driver: docker
	I0223 12:33:17.718599    2113 start.go:857] validating driver "docker" against &{Name:download-only-653000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-653000 Namespace:default APIServerName:
minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP:}
	I0223 12:33:17.718878    2113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:33:17.861343    2113 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:false NGoroutines:48 SystemTime:2023-02-23 20:33:17.770253151 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:33:17.863860    2113 cni.go:84] Creating CNI manager for ""
	I0223 12:33:17.863880    2113 cni.go:157] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0223 12:33:17.863892    2113 start_flags.go:319] config:
	{Name:download-only-653000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-653000 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRu
ntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:33:17.885352    2113 out.go:97] Starting control plane node download-only-653000 in cluster download-only-653000
	I0223 12:33:17.885471    2113 cache.go:120] Beginning downloading kic base image for docker with docker
	I0223 12:33:17.906414    2113 out.go:97] Pulling base image ...
	I0223 12:33:17.906470    2113 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:33:17.906577    2113 image.go:77] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local docker daemon
	I0223 12:33:17.961478    2113 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc to local cache
	I0223 12:33:17.961635    2113 image.go:61] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory
	I0223 12:33:17.961657    2113 image.go:64] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc in local cache directory, skipping pull
	I0223 12:33:17.961662    2113 image.go:103] gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc exists in cache, skipping pull
	I0223 12:33:17.961679    2113 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc as a tarball
	I0223 12:33:18.021063    2113 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	I0223 12:33:18.021102    2113 cache.go:57] Caching tarball of preloaded images
	I0223 12:33:18.021463    2113 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime docker
	I0223 12:33:18.043220    2113 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0223 12:33:18.043331    2113 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4 ...
	I0223 12:33:18.248909    2113 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4?checksum=md5:c6cc8ea1da4e19500d6fe35540785ea8 -> /Users/jenkins/minikube-integration/15909-825/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-653000"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.44s)

TestDownloadOnly/DeleteAll (0.66s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.66s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-653000
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.38s)

TestDownloadOnlyKic (1.99s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-351000 --alsologtostderr --driver=docker 
helpers_test.go:175: Cleaning up "download-docker-351000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-351000
--- PASS: TestDownloadOnlyKic (1.99s)

TestBinaryMirror (1.65s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-271000 --alsologtostderr --binary-mirror http://127.0.0.1:49401 --driver=docker 
aaa_download_only_test.go:308: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-271000 --alsologtostderr --binary-mirror http://127.0.0.1:49401 --driver=docker : (1.040369491s)
helpers_test.go:175: Cleaning up "binary-mirror-271000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-271000
--- PASS: TestBinaryMirror (1.65s)

TestOffline (55.63s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-243000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-243000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (52.863107986s)
helpers_test.go:175: Cleaning up "offline-docker-243000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-243000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-243000: (2.766696517s)
--- PASS: TestOffline (55.63s)

TestAddons/Setup (148.29s)

=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-401000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-darwin-amd64 start -p addons-401000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.286888267s)
--- PASS: TestAddons/Setup (148.29s)

TestAddons/parallel/MetricsServer (5.58s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.151348ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-7m576" [f68ebd30-632d-45a7-8ab9-3e925ffcf907] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.009994686s
addons_test.go:380: (dbg) Run:  kubectl --context addons-401000 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.58s)

TestAddons/parallel/HelmTiller (11.21s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 2.43806ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-pghtk" [6d4d0b54-1e29-4937-8a7b-0ec0fa5617d0] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.008075443s
addons_test.go:438: (dbg) Run:  kubectl --context addons-401000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-401000 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.668074508s)
addons_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.21s)

TestAddons/parallel/CSI (70.28s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 4.367202ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [75af7998-7bee-4b2c-af5d-6fe55f81b680] Pending
helpers_test.go:344: "task-pv-pod" [75af7998-7bee-4b2c-af5d-6fe55f81b680] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [75af7998-7bee-4b2c-af5d-6fe55f81b680] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.008659003s
addons_test.go:549: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-401000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-401000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-401000 delete pod task-pv-pod
addons_test.go:565: (dbg) Run:  kubectl --context addons-401000 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-401000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [31a959cf-d302-4408-981d-cf39302dee38] Pending
helpers_test.go:344: "task-pv-pod-restore" [31a959cf-d302-4408-981d-cf39302dee38] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [31a959cf-d302-4408-981d-cf39302dee38] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.0106331s
addons_test.go:591: (dbg) Run:  kubectl --context addons-401000 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-401000 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-401000 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-darwin-amd64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.501271557s)
addons_test.go:607: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (70.28s)
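
For readability, the csi-hostpath-driver scenario exercised above reduces to the following command sequence (the manifests are the ones under testdata/csi-hostpath-driver/ referenced in the log, and the repeated "get pvc ... -o jsonpath={.status.phase}" lines are simply the test polling each claim's phase):

	kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-401000 delete pod task-pv-pod
	kubectl --context addons-401000 delete pvc hpvc
	kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-401000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-401000 delete pod task-pv-pod-restore
	kubectl --context addons-401000 delete pvc hpvc-restore
	kubectl --context addons-401000 delete volumesnapshot new-snapshot-demo
	out/minikube-darwin-amd64 -p addons-401000 addons disable csi-hostpath-driver --alsologtostderr -v=1
	out/minikube-darwin-amd64 -p addons-401000 addons disable volumesnapshots --alsologtostderr -v=1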

TestAddons/parallel/Headlamp (10.51s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-401000 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-401000 --alsologtostderr -v=1: (1.505386841s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-vb5q6" [e0c6dec6-f49a-4188-847c-a308e5b14555] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-vb5q6" [e0c6dec6-f49a-4188-847c-a308e5b14555] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.006227639s
--- PASS: TestAddons/parallel/Headlamp (10.51s)

TestAddons/parallel/CloudSpanner (5.47s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-9z8zk" [17a601ab-ed37-4927-bf1a-79044f1e926a] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007547001s
addons_test.go:813: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-401000
--- PASS: TestAddons/parallel/CloudSpanner (5.47s)

TestAddons/serial/GCPAuth/Namespaces (0.15s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-401000 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-401000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.15s)

TestAddons/StoppedEnableDisable (11.42s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-401000
addons_test.go:147: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-401000: (10.996599249s)
addons_test.go:151: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-401000
addons_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-401000
--- PASS: TestAddons/StoppedEnableDisable (11.42s)

                                                
                                    
TestCertOptions (43.18s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-267000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
E0223 13:11:09.883551    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-267000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (39.67820487s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-267000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-267000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-267000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-267000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-267000: (2.61754964s)
--- PASS: TestCertOptions (43.18s)
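
Note: the start above requests extra API-server SANs (127.0.0.1, 192.168.15.15, localhost, www.google.com) and port 8555, then dumps the serving certificate. As a manual spot-check during such a run (before the profile is deleted), one way to narrow the openssl output to the SAN block is:

    out/minikube-darwin-amd64 -p cert-options-267000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"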

                                                
                                    
TestCertExpiration (359.59s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-946000 --memory=2048 --cert-expiration=3m --driver=docker 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-946000 --memory=2048 --cert-expiration=3m --driver=docker : (28.747808025s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-946000 --memory=2048 --cert-expiration=8760h --driver=docker 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-946000 --memory=2048 --cert-expiration=8760h --driver=docker : (30.839464471s)
helpers_test.go:175: Cleaning up "cert-expiration-946000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-946000
helpers_test.go:178: (dbg) Non-zero exit: out/minikube-darwin-amd64 delete -p cert-expiration-946000: signal: killed (2m0.003188601s)

                                                
                                                
-- stdout --
	* Deleting "cert-expiration-946000" in docker ...
	* Deleting container "cert-expiration-946000" ...
	* Stopping node "cert-expiration-946000"  ...
	* Powering off "cert-expiration-946000" via SSH ...

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:14:06.265217   12970 delete.go:56] error deleting container "cert-expiration-946000". You may want to delete it manually :
	delete cert-expiration-946000: docker rm -f -v cert-expiration-946000: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Could not kill running container 200aff805f197c8c3ac80ab6f6d5ab6eed91044d4a4ad4c8cd4de931794c3d11, cannot remove - tried to kill container, but did not receive an exit event

                                                
                                                
** /stderr **
helpers_test.go:180: failed cleanup: signal: killed
--- PASS: TestCertExpiration (359.59s)
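
Note: the profile cleanup above timed out because the Docker daemon could not kill the container for cert-expiration-946000, so the delete was killed after 2m0s; the test itself is still recorded as PASS. A possible manual recovery, following the hint in the stderr above (assuming the daemon has recovered; restarting Docker Desktop first may be needed):

    docker ps -a --filter name=cert-expiration-946000
    docker rm -f -v cert-expiration-946000
    out/minikube-darwin-amd64 delete -p cert-expiration-946000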

                                                
                                    
TestDockerFlags (37.96s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-390000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (34.531283075s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-390000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-390000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-390000: (2.612742355s)
--- PASS: TestDockerFlags (37.96s)
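
Note: the start above injects --docker-env=FOO=BAR --docker-env=BAZ=BAT plus --docker-opt values, and the test reads back the Docker unit's Environment and ExecStart properties. A manual spot-check for the environment values on such a profile (grep is just one way to confirm the flags landed; the profile is deleted at the end of the run):

    out/minikube-darwin-amd64 -p docker-flags-390000 ssh "sudo systemctl show docker --property=Environment --no-pager" | grep -E "FOO=BAR|BAZ=BAT"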

                                                
                                    
TestForceSystemdFlag (32.47s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-598000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-598000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (28.555391802s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-598000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-598000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-598000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-598000: (3.469599503s)
--- PASS: TestForceSystemdFlag (32.47s)

                                                
                                    
TestForceSystemdEnv (34.78s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-389000 --memory=2048 --alsologtostderr -v=5 --driver=docker 
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-389000 --memory=2048 --alsologtostderr -v=5 --driver=docker : (31.481872228s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-389000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-389000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-389000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-389000: (2.856075162s)
--- PASS: TestForceSystemdEnv (34.78s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.04s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.04s)

                                                
                                    
TestErrorSpam/setup (31.95s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-357000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-357000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 --driver=docker : (31.946797311s)
--- PASS: TestErrorSpam/setup (31.95s)

                                                
                                    
TestErrorSpam/start (2.45s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 start --dry-run
--- PASS: TestErrorSpam/start (2.45s)

                                                
                                    
TestErrorSpam/status (1.22s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 status
--- PASS: TestErrorSpam/status (1.22s)

                                                
                                    
TestErrorSpam/pause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 pause
--- PASS: TestErrorSpam/pause (1.78s)

                                                
                                    
TestErrorSpam/unpause (1.88s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 unpause
--- PASS: TestErrorSpam/unpause (1.88s)

                                                
                                    
TestErrorSpam/stop (11.51s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 stop: (10.894501291s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-357000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-357000 stop
--- PASS: TestErrorSpam/stop (11.51s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /Users/jenkins/minikube-integration/15909-825/.minikube/files/etc/test/nested/copy/2057/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (49.08s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-615000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2199: (dbg) Done: out/minikube-darwin-amd64 start -p functional-615000 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (49.076626347s)
--- PASS: TestFunctional/serial/StartWithProxy (49.08s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (44.59s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-615000 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-darwin-amd64 start -p functional-615000 --alsologtostderr -v=8: (44.592533173s)
functional_test.go:657: soft start took 44.593099881s for "functional-615000" cluster.
--- PASS: TestFunctional/serial/SoftStart (44.59s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-615000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (8.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cache add k8s.gcr.io/pause:3.1
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 cache add k8s.gcr.io/pause:3.1: (2.823509246s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 cache add k8s.gcr.io/pause:3.3: (2.7486162s)
functional_test.go:1043: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 cache add k8s.gcr.io/pause:latest: (2.595557803s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (8.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.66s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-615000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2692981132/001
functional_test.go:1083: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cache add minikube-local-cache-test:functional-615000
functional_test.go:1083: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 cache add minikube-local-cache-test:functional-615000: (1.127498642s)
functional_test.go:1088: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cache delete minikube-local-cache-test:functional-615000
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-615000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.66s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.41s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (389.064314ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 cache reload: (1.601713239s)
functional_test.go:1157: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.82s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 kubectl -- --context functional-615000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.55s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.81s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-615000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.81s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.77s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-615000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0223 12:41:09.818836    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:09.824610    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:09.834772    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:09.854888    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:09.895249    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:09.975417    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:10.136168    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:10.456294    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:11.098139    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:12.378302    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:14.939684    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:20.060032    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
E0223 12:41:30.300463    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-darwin-amd64 start -p functional-615000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.766061636s)
functional_test.go:755: restart took 42.766220878s for "functional-615000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.77s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-615000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.89s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 logs
functional_test.go:1230: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 logs: (2.886214882s)
--- PASS: TestFunctional/serial/LogsCmd (2.89s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (3.08s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1468868083/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1468868083/001/logs.txt: (3.079904901s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.08s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 config get cpus: exit status 14 (45.011761ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 config get cpus: exit status 14 (66.634836ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-615000 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-615000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 4755: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.84s)

                                                
                                    
TestFunctional/parallel/DryRun (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-615000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:968: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-615000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (804.509125ms)

                                                
                                                
-- stdout --
	* [functional-615000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 12:42:50.018965    4688 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:42:50.019639    4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:42:50.019645    4688 out.go:309] Setting ErrFile to fd 2...
	I0223 12:42:50.019650    4688 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:42:50.019875    4688 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:42:50.021549    4688 out.go:303] Setting JSON to false
	I0223 12:42:50.040643    4688 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":745,"bootTime":1677184225,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:42:50.040805    4688 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:42:50.065878    4688 out.go:177] * [functional-615000] minikube v1.29.0 on Darwin 13.2
	I0223 12:42:50.107867    4688 notify.go:220] Checking for updates...
	I0223 12:42:50.129415    4688 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 12:42:50.171568    4688 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:42:50.213464    4688 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:42:50.255739    4688 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:42:50.297447    4688 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 12:42:50.318918    4688 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 12:42:50.340876    4688 config.go:182] Loaded profile config "functional-615000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:42:50.341311    4688 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 12:42:50.404284    4688 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:42:50.404423    4688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:42:50.546677    4688 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 20:42:50.455117775 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:42:50.568498    4688 out.go:177] * Using the docker driver based on existing profile
	I0223 12:42:50.589059    4688 start.go:296] selected driver: docker
	I0223 12:42:50.589076    4688 start.go:857] validating driver "docker" against &{Name:functional-615000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-615000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:42:50.589180    4688 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 12:42:50.629080    4688 out.go:177] 
	W0223 12:42:50.666092    4688 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 12:42:50.703093    4688 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-615000 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.72s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-615000 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-615000 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (652.087143ms)

                                                
                                                
-- stdout --
	* [functional-615000] minikube v1.29.0 sur Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 12:42:51.732731    4729 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:42:51.732904    4729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:42:51.732909    4729 out.go:309] Setting ErrFile to fd 2...
	I0223 12:42:51.732912    4729 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:42:51.733033    4729 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:42:51.734621    4729 out.go:303] Setting JSON to false
	I0223 12:42:51.753349    4729 start.go:125] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":746,"bootTime":1677184225,"procs":399,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.2","kernelVersion":"22.3.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0223 12:42:51.753480    4729 start.go:133] gopshost.Virtualization returned error: not implemented yet
	I0223 12:42:51.774653    4729 out.go:177] * [functional-615000] minikube v1.29.0 sur Darwin 13.2
	I0223 12:42:51.816888    4729 notify.go:220] Checking for updates...
	I0223 12:42:51.837665    4729 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 12:42:51.858937    4729 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	I0223 12:42:51.880816    4729 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0223 12:42:51.901595    4729 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 12:42:51.922939    4729 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	I0223 12:42:51.944609    4729 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 12:42:51.966217    4729 config.go:182] Loaded profile config "functional-615000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:42:51.966611    4729 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 12:42:52.030165    4729 docker.go:121] docker version: linux-20.10.22:Docker Desktop 4.16.2 (95914)
	I0223 12:42:52.030289    4729 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0223 12:42:52.172532    4729 info.go:266] docker info: {ID:FVKD:EH3U:6NRJ:UVYM:QK2L:QWGF:RWMD:TD3Y:WQHD:TMXK:AEHL:XECO Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:61 OomKillDisable:false NGoroutines:56 SystemTime:2023-02-23 20:42:52.080322981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:5 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231715840 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.22 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9ba4b250366a5ddde94bb7c9d1def331423aa323 Expected:9ba4b250366a5ddde94bb7c9d1def331423aa323} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/Users/jenkins/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-buildx] ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.10.0] map[Name:compose Path:/Users/jenkins/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-compose] ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.15.1] map[Name:dev Path:/Users/jenkins/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-dev] ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.5] map[Name:extension Path:/Users/jenkins/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-extension] ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.17] map[Name:sbom Path
:/Users/jenkins/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-sbom] ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/Users/jenkins/.docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShadowedPaths:[/usr/local/lib/docker/cli-plugins/docker-scan] ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0223 12:42:52.194148    4729 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0223 12:42:52.236205    4729 start.go:296] selected driver: docker
	I0223 12:42:52.236238    4729 start.go:857] validating driver "docker" against &{Name:functional-615000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-615000 Namespace:default APIServerName:miniku
beCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L Mou
ntGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 12:42:52.236339    4729 start.go:868] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 12:42:52.261169    4729 out.go:177] 
	W0223 12:42:52.282241    4729 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 12:42:52.303292    4729 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.65s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 status
functional_test.go:854: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.22s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.24s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (32.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3fb245b4-23f5-44f4-aa8a-80085897aaac] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.008248275s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-615000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-615000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-615000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-615000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02242e54-67f1-4dc0-8dae-7a6eb3bd1817] Pending
helpers_test.go:344: "sp-pod" [02242e54-67f1-4dc0-8dae-7a6eb3bd1817] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0223 12:42:31.743957    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [02242e54-67f1-4dc0-8dae-7a6eb3bd1817] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.006572636s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-615000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-615000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-615000 delete -f testdata/storage-provisioner/pod.yaml: (1.182068795s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-615000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3ab466f8-b52f-4bfc-912f-9fd1cc295760] Pending
helpers_test.go:344: "sp-pod" [3ab466f8-b52f-4bfc-912f-9fd1cc295760] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3ab466f8-b52f-4bfc-912f-9fd1cc295760] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.015094251s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-615000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (32.01s)
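
The claim-and-pod sequence above can be replayed by hand; a rough sketch of the same steps (the manifests are the test's own testdata files and are not reproduced here):
    kubectl --context functional-615000 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-615000 get pvc myclaim -o=json    # wait for the claim to bind
    kubectl --context functional-615000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-615000 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-615000 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-615000 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-615000 exec sp-pod -- ls /tmp/mount    # the file survives the pod restart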

TestFunctional/parallel/SSHCmd (0.78s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.78s)

TestFunctional/parallel/CpCmd (2.13s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh -n functional-615000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 cp functional-615000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd2602403785/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh -n functional-615000 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.13s)
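
The copy test is a round trip through the node's filesystem; a minimal sketch (the copy-back destination below is illustrative, the test uses a per-run temp dir):
    # host -> node, then verify over ssh
    out/minikube-darwin-amd64 -p functional-615000 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-darwin-amd64 -p functional-615000 ssh -n functional-615000 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-darwin-amd64 -p functional-615000 cp functional-615000:/home/docker/cp-test.txt /tmp/cp-test.txt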

TestFunctional/parallel/MySQL (27.4s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-615000 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-s9pb9" [e6f70b2a-de6f-4e83-b596-64785e685647] Pending
helpers_test.go:344: "mysql-888f84dd9-s9pb9" [e6f70b2a-de6f-4e83-b596-64785e685647] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-s9pb9" [e6f70b2a-de6f-4e83-b596-64785e685647] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.072677664s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;": exit status 1 (233.120635ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;": exit status 1 (177.351788ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;": exit status 1 (109.421644ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.40s)
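
The access-denied and socket errors above are expected while mysqld is still initializing; the test simply re-issues the same query until it succeeds. A hedged sketch of that retry loop (the interval is an arbitrary choice, not the test's):
    # repeat the probe from the log until the server accepts the connection
    until kubectl --context functional-615000 exec mysql-888f84dd9-s9pb9 -- mysql -ppassword -e "show databases;"; do
        sleep 5
    done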

TestFunctional/parallel/FileSync (0.44s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/2057/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /etc/test/nested/copy/2057/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.44s)

TestFunctional/parallel/CertSync (2.77s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/2057.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /etc/ssl/certs/2057.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/2057.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /usr/share/ca-certificates/2057.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/20572.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /etc/ssl/certs/20572.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/20572.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /usr/share/ca-certificates/20572.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.77s)

TestFunctional/parallel/NodeLabels (0.1s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-615000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 ssh "sudo systemctl is-active crio": exit status 1 (591.173689ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

TestFunctional/parallel/License (0.82s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.82s)

TestFunctional/parallel/Version/short (0.14s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.14s)

TestFunctional/parallel/Version/components (1.02s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 version -o=json --components
functional_test.go:2235: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 version -o=json --components: (1.015535141s)
--- PASS: TestFunctional/parallel/Version/components (1.02s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-615000 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.6
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-615000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-615000
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.36s)
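
The same inventory is requested in three more formats by the tests that follow; for reference, the four variants exercised are:
    out/minikube-darwin-amd64 -p functional-615000 image ls --format short    # tags only
    out/minikube-darwin-amd64 -p functional-615000 image ls --format table    # image, tag, ID, size
    out/minikube-darwin-amd64 -p functional-615000 image ls --format json
    out/minikube-darwin-amd64 -p functional-615000 image ls --format yaml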

TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-615000 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/library/minikube-local-cache-test | functional-615000 | 3e38dbfca6e9b | 30B    |
| docker.io/library/nginx                     | alpine            | 2bc7edbc3cf2f | 40.7MB |
| docker.io/library/nginx                     | latest            | 3f8a00f137a0d | 142MB  |
| docker.io/library/mysql                     | 5.7               | be16cf2d832a9 | 455MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| gcr.io/google-containers/addon-resizer      | functional-615000 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.26.1           | deb04688c4a35 | 134MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.1           | e9c08e11b07f6 | 124MB  |
| registry.k8s.io/kube-proxy                  | v1.26.1           | 46a6bb3c77ce0 | 65.6MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/kube-scheduler              | v1.26.1           | 655493523f607 | 56.3MB |
| registry.k8s.io/etcd                        | 3.5.6-0           | fce326961ae2d | 299MB  |
| registry.k8s.io/pause                       | 3.6               | 6270bb605e12e | 683kB  |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.38s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-615000 image ls --format json:
[{"id":"3e38dbfca6e9b73bd22184368648e34ea1a219827c91e18832c92ed03ec9a718","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-615000"],"size":"30"},{"id":"be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"455000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":[]
,"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"134000000"},{"id":"e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"124000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.6"],"size":"683000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"40700000"},{"id":"46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"s
ize":"65599999"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"56300000"},{"id":"fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"299000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-615000"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"82e4c8a736a4fcf22b5ef
9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-615000 image ls --format yaml:
- id: 2bc7edbc3cf2fce630a95d0586c48cd248e5df37df5b1244728a5c8c91becfe0
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "40700000"
- id: 3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-615000
size: "32900000"
- id: 3e38dbfca6e9b73bd22184368648e34ea1a219827c91e18832c92ed03ec9a718
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-615000
size: "30"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.6
size: "683000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "56300000"
- id: e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "124000000"
- id: 46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "65599999"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: be16cf2d832a9a54ce42144e25f5ae7cc66bccf0e003837e7b5eb1a455dc742b
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "455000000"
- id: fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "299000000"
- id: deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "134000000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.41s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 ssh pgrep buildkitd: exit status 1 (412.58921ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image build -t localhost/my-image:functional-615000 testdata/build
2023/02/23 12:43:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image build -t localhost/my-image:functional-615000 testdata/build: (3.518827541s)
functional_test.go:317: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-615000 image build -t localhost/my-image:functional-615000 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 7d69ede7245d
Removing intermediate container 7d69ede7245d
---> 182c708f7cbc
Step 3/3 : ADD content.txt /
---> e245e99249d4
Successfully built e245e99249d4
Successfully tagged localhost/my-image:functional-615000
functional_test.go:320: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-615000 image build -t localhost/my-image:functional-615000 testdata/build:
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.23s)
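
The build goes through the legacy Docker builder inside the node (hence the deprecation notice in stderr); a minimal sketch of the same invocation:
    # testdata/build is the test's build context; any directory with a Dockerfile works the same way
    out/minikube-darwin-amd64 -p functional-615000 image build -t localhost/my-image:functional-615000 testdata/build
    out/minikube-darwin-amd64 -p functional-615000 image ls    # confirm the new tag is present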

TestFunctional/parallel/ImageCommands/Setup (2.89s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.810142731s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-615000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.89s)

TestFunctional/parallel/DockerEnv/bash (1.85s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:493: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-615000 docker-env) && out/minikube-darwin-amd64 status -p functional-615000"
functional_test.go:493: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-615000 docker-env) && out/minikube-darwin-amd64 status -p functional-615000": (1.172362458s)
functional_test.go:516: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-615000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.85s)
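
The docker-env check points the host's docker CLI at the daemon inside the cluster node; a minimal sketch of the same shell usage:
    # export DOCKER_HOST and related variables for the current shell, then list the node's images
    eval $(out/minikube-darwin-amd64 -p functional-615000 docker-env)
    docker images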

TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.43s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image load --daemon gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:352: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image load --daemon gcr.io/google-containers/addon-resizer:functional-615000: (3.46861371s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.78s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.69s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image load --daemon gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:362: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image load --daemon gcr.io/google-containers/addon-resizer:functional-615000: (2.268771198s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.69s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.88s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E0223 12:41:50.781031    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.439333211s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:242: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image load --daemon gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:242: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image load --daemon gcr.io/google-containers/addon-resizer:functional-615000: (3.967277475s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.88s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.08s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image save gcr.io/google-containers/addon-resizer:functional-615000 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image save gcr.io/google-containers/addon-resizer:functional-615000 /Users/jenkins/workspace/addon-resizer-save.tar: (2.080633626s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.08s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image rm gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image load /Users/jenkins/workspace/addon-resizer-save.tar: (2.005194382s)
functional_test.go:445: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.39s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:421: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 image save --daemon gcr.io/google-containers/addon-resizer:functional-615000
functional_test.go:421: (dbg) Done: out/minikube-darwin-amd64 -p functional-615000 image save --daemon gcr.io/google-containers/addon-resizer:functional-615000: (2.551578885s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-615000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.67s)
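
Taken together, the image subtests above form a save/load round trip; a rough sketch (the tar path is the one used in this run):
    # cluster image -> tarball on the host
    out/minikube-darwin-amd64 -p functional-615000 image save gcr.io/google-containers/addon-resizer:functional-615000 /Users/jenkins/workspace/addon-resizer-save.tar
    # drop it from the cluster, then reload it from the tarball
    out/minikube-darwin-amd64 -p functional-615000 image rm gcr.io/google-containers/addon-resizer:functional-615000
    out/minikube-darwin-amd64 -p functional-615000 image load /Users/jenkins/workspace/addon-resizer-save.tar
    # or push the cluster copy back into the host's docker daemon
    out/minikube-darwin-amd64 -p functional-615000 image save --daemon gcr.io/google-containers/addon-resizer:functional-615000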

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-615000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-615000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [f048c95d-06ae-44e8-9c71-014ab5008e40] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [f048c95d-06ae-44e8-9c71-014ab5008e40] Running
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.008438845s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.14s)

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 service list -o json
functional_test.go:1552: Took "708.45619ms" to run "out/minikube-darwin-amd64 -p functional-615000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.71s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-615000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
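
The tunnel subtests amount to running the tunnel in the background and resolving the LoadBalancer IP it assigns; a minimal sketch (run the first command in its own shell, it stays in the foreground):
    out/minikube-darwin-amd64 -p functional-615000 tunnel --alsologtostderr
    # then read the ingress IP assigned to the test service
    kubectl --context functional-615000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}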

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-615000 tunnel --alsologtostderr] ...
helpers_test.go:502: unable to terminate pid 4341: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.51s)

TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1312: Took "407.762055ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1326: Took "68.633887ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1363: Took "408.527466ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1376: Took "67.65456ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.48s)

TestFunctional/parallel/MountCmd/any-port (12.65s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-615000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3943313415/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677184954836942000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3943313415/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677184954836942000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3943313415/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677184954836942000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3943313415/001/test-1677184954836942000
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.525529ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 20:42 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 20:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 20:42 test-1677184954836942000
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh cat /mount-9p/test-1677184954836942000
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-615000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [628ccc19-e460-46e3-be8b-48a3e8e6f7e1] Pending
helpers_test.go:344: "busybox-mount" [628ccc19-e460-46e3-be8b-48a3e8e6f7e1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [628ccc19-e460-46e3-be8b-48a3e8e6f7e1] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [628ccc19-e460-46e3-be8b-48a3e8e6f7e1] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 9.007590063s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-615000 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-615000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3943313415/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (12.65s)
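
The mount test drives a 9p mount from a host temp directory into the node; a condensed sketch with a placeholder host directory (the test backgrounds the mount as a daemon and uses a per-run temp dir):
    out/minikube-darwin-amd64 mount -p functional-615000 /tmp/mount-src:/mount-9p --alsologtostderr -v=1 &
    out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-darwin-amd64 -p functional-615000 ssh -- ls -la /mount-9p
    # force-unmount when done; the specific-port variant below adds --port 46464
    out/minikube-darwin-amd64 -p functional-615000 ssh "sudo umount -f /mount-9p"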

TestFunctional/parallel/MountCmd/specific-port (2.49s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-615000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port685361645/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (378.981032ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-615000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port685361645/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-darwin-amd64 -p functional-615000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-615000 ssh "sudo umount -f /mount-9p": exit status 1 (389.681259ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-darwin-amd64 -p functional-615000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-615000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port685361645/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.49s)

TestFunctional/delete_addon-resizer_images (0.15s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-615000
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-615000
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-615000
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestImageBuild/serial/NormalBuild (2.4s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:73: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-260000
image_test.go:73: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-260000: (2.398616046s)
--- PASS: TestImageBuild/serial/NormalBuild (2.40s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.98s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:94: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-260000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-260000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.47s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-260000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.41s)

                                                
                                    
TestJSONOutput/start/Command (46.94s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-112000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E0223 12:51:46.561938    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
E0223 12:52:14.256820    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-112000 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (46.94260836s)
--- PASS: TestJSONOutput/start/Command (46.94s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-112000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-112000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (10.86s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-112000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-112000 --output=json --user=testUser: (10.858212359s)
--- PASS: TestJSONOutput/stop/Command (10.86s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.76s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-765000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-765000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (375.046896ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"91514297-b6a4-43d9-bb6a-80b96059336a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-765000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9add2390-5e8a-43e8-89b1-9ef4df4bf208","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"b4c9622c-f1d0-4a0f-a0dd-4076e1344761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig"}}
	{"specversion":"1.0","id":"69583d3d-5599-4c39-a385-cfe4708d5af6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"6d434932-7c2c-481d-ad1a-7ce339559935","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"50ea66a3-56ff-4737-8ba6-bd926130f1bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube"}}
	{"specversion":"1.0","id":"61b6b4db-9f1b-43bd-b8eb-cc8134580ebb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4d84eb99-c90f-4cc7-827b-2f628f53f43e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-765000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-765000
--- PASS: TestErrorJSONOutput (0.76s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (35.34s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-235000 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-235000 --network=: (32.612179705s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-235000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-235000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-235000: (2.669609006s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.34s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (29.82s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-968000 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-968000 --network=bridge: (27.395170708s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-968000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-968000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-968000: (2.370804596s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.82s)

                                                
                                    
TestKicExistingNetwork (29.81s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-801000 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-801000 --network=existing-network: (27.042748371s)
helpers_test.go:175: Cleaning up "existing-network-801000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-801000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-801000: (2.413045414s)
--- PASS: TestKicExistingNetwork (29.81s)

                                                
                                    
TestKicCustomSubnet (30.8s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-337000 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-337000 --subnet=192.168.60.0/24: (28.140080548s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-337000 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-337000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-337000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-337000: (2.603857849s)
--- PASS: TestKicCustomSubnet (30.80s)

                                                
                                    
TestKicStaticIP (31.19s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 start -p static-ip-288000 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-darwin-amd64 start -p static-ip-288000 --static-ip=192.168.200.200: (28.354422224s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-darwin-amd64 -p static-ip-288000 ip
helpers_test.go:175: Cleaning up "static-ip-288000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p static-ip-288000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p static-ip-288000: (2.601398781s)
--- PASS: TestKicStaticIP (31.19s)

                                                
                                    
TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.07s)

                                                
                                    
TestMinikubeProfile (67.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-186000 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-186000 --driver=docker : (32.540021504s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-188000 --driver=docker 
E0223 12:56:09.897761    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-188000 --driver=docker : (28.637057539s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-186000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-188000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-188000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-188000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-188000: (2.365110884s)
helpers_test.go:175: Cleaning up "first-186000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-186000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-186000: (2.619940091s)
--- PASS: TestMinikubeProfile (67.90s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.17s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-354000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-354000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (7.171018542s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.17s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-354000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (8.1s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-367000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-367000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (7.098711152s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.10s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-367000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.12s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-354000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-354000 --alsologtostderr -v=5: (2.116756377s)
--- PASS: TestMountStart/serial/DeleteFirst (2.12s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-367000 ssh -- ls /minikube-host
E0223 12:56:46.569251    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.39s)

                                                
                                    
TestMountStart/serial/Stop (1.6s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-367000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-367000: (1.595726962s)
--- PASS: TestMountStart/serial/Stop (1.60s)

                                                
                                    
TestMountStart/serial/RestartStopped (5.95s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-367000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-367000: (4.951532995s)
--- PASS: TestMountStart/serial/RestartStopped (5.95s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-367000 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (90.64s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-899000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E0223 12:57:32.946402    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-899000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m29.798807008s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (90.64s)

                                                
                                    
TestMultiNode/serial/AddNode (21.62s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-899000 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-899000 -v 3 --alsologtostderr: (20.486091629s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr
multinode_test.go:114: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr: (1.131098516s)
--- PASS: TestMultiNode/serial/AddNode (21.62s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.45s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (14.49s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp testdata/cp-test.txt multinode-899000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile413386721/001/cp-test_multinode-899000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000:/home/docker/cp-test.txt multinode-899000-m02:/home/docker/cp-test_multinode-899000_multinode-899000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m02 "sudo cat /home/docker/cp-test_multinode-899000_multinode-899000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000:/home/docker/cp-test.txt multinode-899000-m03:/home/docker/cp-test_multinode-899000_multinode-899000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m03 "sudo cat /home/docker/cp-test_multinode-899000_multinode-899000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp testdata/cp-test.txt multinode-899000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile413386721/001/cp-test_multinode-899000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000-m02:/home/docker/cp-test.txt multinode-899000:/home/docker/cp-test_multinode-899000-m02_multinode-899000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000 "sudo cat /home/docker/cp-test_multinode-899000-m02_multinode-899000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000-m02:/home/docker/cp-test.txt multinode-899000-m03:/home/docker/cp-test_multinode-899000-m02_multinode-899000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m03 "sudo cat /home/docker/cp-test_multinode-899000-m02_multinode-899000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp testdata/cp-test.txt multinode-899000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile413386721/001/cp-test_multinode-899000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000-m03:/home/docker/cp-test.txt multinode-899000:/home/docker/cp-test_multinode-899000-m03_multinode-899000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000 "sudo cat /home/docker/cp-test_multinode-899000-m03_multinode-899000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 cp multinode-899000-m03:/home/docker/cp-test.txt multinode-899000-m02:/home/docker/cp-test_multinode-899000-m03_multinode-899000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 ssh -n multinode-899000-m02 "sudo cat /home/docker/cp-test_multinode-899000-m03_multinode-899000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.49s)

                                                
                                    
TestMultiNode/serial/StopNode (2.99s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 node stop m03: (1.506246823s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-899000 status: exit status 7 (742.321954ms)

                                                
                                                
-- stdout --
	multinode-899000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-899000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-899000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr: exit status 7 (743.091463ms)

                                                
                                                
-- stdout --
	multinode-899000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-899000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-899000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 12:59:20.480929    8564 out.go:296] Setting OutFile to fd 1 ...
	I0223 12:59:20.481099    8564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:59:20.481104    8564 out.go:309] Setting ErrFile to fd 2...
	I0223 12:59:20.481108    8564 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 12:59:20.481213    8564 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 12:59:20.481400    8564 out.go:303] Setting JSON to false
	I0223 12:59:20.481424    8564 mustload.go:65] Loading cluster: multinode-899000
	I0223 12:59:20.481482    8564 notify.go:220] Checking for updates...
	I0223 12:59:20.481694    8564 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 12:59:20.481709    8564 status.go:255] checking status of multinode-899000 ...
	I0223 12:59:20.482092    8564 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 12:59:20.540164    8564 status.go:330] multinode-899000 host status = "Running" (err=<nil>)
	I0223 12:59:20.540196    8564 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:59:20.540455    8564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000
	I0223 12:59:20.596825    8564 host.go:66] Checking if "multinode-899000" exists ...
	I0223 12:59:20.597720    8564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:59:20.597823    8564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:59:20.655637    8564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51100 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000/id_rsa Username:docker}
	I0223 12:59:20.745931    8564 ssh_runner.go:195] Run: systemctl --version
	I0223 12:59:20.750511    8564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:59:20.760182    8564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-899000
	I0223 12:59:20.818875    8564 kubeconfig.go:92] found "multinode-899000" server: "https://127.0.0.1:51104"
	I0223 12:59:20.818900    8564 api_server.go:165] Checking apiserver status ...
	I0223 12:59:20.818958    8564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 12:59:20.828995    8564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1885/cgroup
	W0223 12:59:20.836819    8564 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1885/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0223 12:59:20.836881    8564 ssh_runner.go:195] Run: ls
	I0223 12:59:20.840543    8564 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:51104/healthz ...
	I0223 12:59:20.846091    8564 api_server.go:278] https://127.0.0.1:51104/healthz returned 200:
	ok
	I0223 12:59:20.846105    8564 status.go:421] multinode-899000 apiserver status = Running (err=<nil>)
	I0223 12:59:20.846122    8564 status.go:257] multinode-899000 status: &{Name:multinode-899000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 12:59:20.846134    8564 status.go:255] checking status of multinode-899000-m02 ...
	I0223 12:59:20.846392    8564 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 12:59:20.902796    8564 status.go:330] multinode-899000-m02 host status = "Running" (err=<nil>)
	I0223 12:59:20.902824    8564 host.go:66] Checking if "multinode-899000-m02" exists ...
	I0223 12:59:20.903116    8564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-899000-m02
	I0223 12:59:20.960560    8564 host.go:66] Checking if "multinode-899000-m02" exists ...
	I0223 12:59:20.960822    8564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 12:59:20.960875    8564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-899000-m02
	I0223 12:59:21.018106    8564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:51172 SSHKeyPath:/Users/jenkins/minikube-integration/15909-825/.minikube/machines/multinode-899000-m02/id_rsa Username:docker}
	I0223 12:59:21.109108    8564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 12:59:21.118674    8564 status.go:257] multinode-899000-m02 status: &{Name:multinode-899000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 12:59:21.118699    8564 status.go:255] checking status of multinode-899000-m03 ...
	I0223 12:59:21.118959    8564 cli_runner.go:164] Run: docker container inspect multinode-899000-m03 --format={{.State.Status}}
	I0223 12:59:21.178810    8564 status.go:330] multinode-899000-m03 host status = "Stopped" (err=<nil>)
	I0223 12:59:21.178835    8564 status.go:343] host is not running, skipping remaining checks
	I0223 12:59:21.178845    8564 status.go:257] multinode-899000-m03 status: &{Name:multinode-899000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.99s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 node start m03 --alsologtostderr: (8.922743514s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.01s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (86.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-899000
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-899000
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-899000: (22.92406303s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-899000 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-899000 --wait=true -v=8 --alsologtostderr: (1m3.700937079s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-899000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.72s)

                                                
                                    
TestMultiNode/serial/DeleteNode (6.15s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 node delete m03: (5.266399152s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (6.15s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.88s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 stop
E0223 13:01:09.904726    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-899000 stop: (21.562963623s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-899000 status: exit status 7 (159.391448ms)

                                                
                                                
-- stdout --
	multinode-899000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-899000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr: exit status 7 (154.999962ms)

                                                
                                                
-- stdout --
	multinode-899000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-899000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0223 13:01:25.822984    9119 out.go:296] Setting OutFile to fd 1 ...
	I0223 13:01:25.823145    9119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:01:25.823150    9119 out.go:309] Setting ErrFile to fd 2...
	I0223 13:01:25.823154    9119 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 13:01:25.823273    9119 root.go:336] Updating PATH: /Users/jenkins/minikube-integration/15909-825/.minikube/bin
	I0223 13:01:25.823457    9119 out.go:303] Setting JSON to false
	I0223 13:01:25.823482    9119 mustload.go:65] Loading cluster: multinode-899000
	I0223 13:01:25.823534    9119 notify.go:220] Checking for updates...
	I0223 13:01:25.823779    9119 config.go:182] Loaded profile config "multinode-899000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.26.1
	I0223 13:01:25.823793    9119 status.go:255] checking status of multinode-899000 ...
	I0223 13:01:25.824167    9119 cli_runner.go:164] Run: docker container inspect multinode-899000 --format={{.State.Status}}
	I0223 13:01:25.878551    9119 status.go:330] multinode-899000 host status = "Stopped" (err=<nil>)
	I0223 13:01:25.878568    9119 status.go:343] host is not running, skipping remaining checks
	I0223 13:01:25.878573    9119 status.go:257] multinode-899000 status: &{Name:multinode-899000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 13:01:25.878604    9119 status.go:255] checking status of multinode-899000-m02 ...
	I0223 13:01:25.878849    9119 cli_runner.go:164] Run: docker container inspect multinode-899000-m02 --format={{.State.Status}}
	I0223 13:01:25.933333    9119 status.go:330] multinode-899000-m02 host status = "Stopped" (err=<nil>)
	I0223 13:01:25.933366    9119 status.go:343] host is not running, skipping remaining checks
	I0223 13:01:25.933377    9119 status.go:257] multinode-899000-m02 status: &{Name:multinode-899000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.88s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (55.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-899000 --wait=true -v=8 --alsologtostderr --driver=docker 
E0223 13:01:46.574019    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-899000 --wait=true -v=8 --alsologtostderr --driver=docker : (54.244236831s)
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-899000 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.11s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (32.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-899000
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-899000-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-899000-m02 --driver=docker : exit status 14 (386.056239ms)

                                                
                                                
-- stdout --
	* [multinode-899000-m02] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-899000-m02' is duplicated with machine name 'multinode-899000-m02' in profile 'multinode-899000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-899000-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-899000-m03 --driver=docker : (29.50378446s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-899000
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-899000: exit status 80 (469.961457ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-899000
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-899000-m03 already exists in multinode-899000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-899000-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-899000-m03: (2.392804934s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.80s)

                                                
                                    
TestPreload (135.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-365000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E0223 13:03:09.630989    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-365000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (1m7.256942369s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-365000 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-365000 -- docker pull gcr.io/k8s-minikube/busybox: (2.56510776s)
preload_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-365000
preload_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-365000: (10.822793387s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-365000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker 
preload_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-365000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker : (51.535299524s)
preload_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-365000 -- docker images
helpers_test.go:175: Cleaning up "test-preload-365000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-365000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-365000: (2.672368092s)
--- PASS: TestPreload (135.26s)

                                                
                                    
TestScheduledStopUnix (101.75s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-489000 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-489000 --memory=2048 --driver=docker : (27.585124177s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-489000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-489000 -n scheduled-stop-489000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-489000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-489000 --cancel-scheduled
E0223 13:06:09.909539    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/addons-401000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-489000 -n scheduled-stop-489000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-489000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-489000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0223 13:06:46.580390    2057 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/functional-615000/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-489000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-489000: exit status 7 (104.252234ms)

                                                
                                                
-- stdout --
	scheduled-stop-489000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-489000 -n scheduled-stop-489000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-489000 -n scheduled-stop-489000: exit status 7 (101.705766ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-489000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-489000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-489000: (2.290121005s)
--- PASS: TestScheduledStopUnix (101.75s)

                                                
                                    
TestSkaffold (67.32s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4208306425 version
skaffold_test.go:63: skaffold version: v2.1.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-719000 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-719000 --memory=2600 --driver=docker : (32.455840296s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4208306425 run --minikube-profile skaffold-719000 --kube-context skaffold-719000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe4208306425 run --minikube-profile skaffold-719000 --kube-context skaffold-719000 --status-check=true --port-forward=false --interactive=false: (18.103481767s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5748b9dd5b-ljgrf" [f0c8b5d7-cc58-4dc2-abf7-1df52ec214f9] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.013760444s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7db6dcc68d-grhn7" [8c3b8832-129d-4f77-9570-15ff26609681] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009377453s
helpers_test.go:175: Cleaning up "skaffold-719000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-719000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-719000: (2.890363637s)
--- PASS: TestSkaffold (67.32s)

                                                
                                    
TestInsufficientStorage (14.86s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-866000 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-866000 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (11.734860659s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"8648d653-b90f-4eb2-a345-b3889f239860","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-866000] minikube v1.29.0 on Darwin 13.2","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c481f9e2-e9a6-4995-aab6-d266afe3e66a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"9c72a753-062f-4bea-a8d7-c56b9acd7f5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig"}}
	{"specversion":"1.0","id":"ab9827a1-7262-438a-8d95-448a280557dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"9f6d698b-8394-4b3b-b999-e1299bfcbeb4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5b564be7-2290-4ad4-be15-930ae5058960","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube"}}
	{"specversion":"1.0","id":"c80ac9f4-dbe2-409d-aed4-c3a55ebb5b99","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9520c7cb-6775-484e-850d-ef4b70feb496","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b87f5e1e-5f58-4129-9e0f-e4aa34072668","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ff3d77ff-63ba-48b5-a14e-7c2d16cebce1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"4942d7bd-516a-4604-92e8-3b80490b1350","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"32235ff9-2b07-4bf2-b62e-b121e3b747be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-866000 in cluster insufficient-storage-866000","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bb6e5605-11f6-4c50-9506-5a639f9aa128","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6023af1f-2568-4c7b-992b-cf8051ecba46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1020d928-b117-4c3d-83fa-ec60c78e7448","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-866000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-866000 --output=json --layout=cluster: exit status 7 (382.042406ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-866000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-866000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:08:19.363432   10956 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-866000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-866000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-866000 --output=json --layout=cluster: exit status 7 (385.417996ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-866000","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-866000","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0223 13:08:19.749357   10966 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-866000" does not appear in /Users/jenkins/minikube-integration/15909-825/kubeconfig
	E0223 13:08:19.758237   10966 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15909-825/.minikube/profiles/insufficient-storage-866000/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-866000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-866000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-866000: (2.360862581s)
--- PASS: TestInsufficientStorage (14.86s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.52s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3214193419/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3214193419/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3214193419/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3214193419/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (18.52s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (27.1s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.29.0 on darwin
- MINIKUBE_LOCATION=15909
- KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current229522416/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current229522416/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current229522416/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current229522416/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (27.10s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (4.25s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (4.25s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-413000 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (406.692827ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-413000] minikube v1.29.0 on Darwin 13.2
	  - MINIKUBE_LOCATION=15909
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15909-825/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15909-825/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.41s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-413000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-413000 "sudo systemctl is-active --quiet service kubelet": exit status 80 (191.865967ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_a637006dfde1245e93469fe3227a30492e7a4c9f_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.19s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (10.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (6.482036715s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-darwin-amd64 profile list --output=json: (4.246932949s)
--- PASS: TestNoKubernetes/serial/ProfileList (10.73s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-413000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-413000 "sudo systemctl is-active --quiet service kubelet": exit status 80 (191.845444ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: state: unknown state "NoKubernetes-413000": docker container inspect NoKubernetes-413000 --format=: exit status 1
	stdout:
	
	
	stderr:
	Error: No such container: NoKubernetes-413000
	
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                        │
	│    * If the above advice does not help, please let us know:                                                            │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                          │
	│                                                                                                                        │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                               │
	│    * Please also attach the following file to the GitHub issue:                                                        │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_ssh_a637006dfde1245e93469fe3227a30492e7a4c9f_0.log    │
	│                                                                                                                        │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-767000 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    

Test skip (18/253)

TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

                                                
                                    
TestAddons/parallel/Registry (15.97s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 9.857706ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-l2w7n" [9d36e011-2197-46ae-b146-8896368f2678] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.009149462s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-48g4p" [2cca7482-a01f-44a7-b4f0-178b8e5cae2d] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010397789s
addons_test.go:305: (dbg) Run:  kubectl --context addons-401000 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-401000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.853246344s)
addons_test.go:320: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.97s)

                                                
                                    
TestAddons/parallel/Ingress (10.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-401000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-401000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [156cdd64-4921-4eb4-9667-294ef23d1718] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [156cdd64-4921-4eb4-9667-294ef23d1718] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.041042964s
addons_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 -p addons-401000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:247: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (10.30s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (7.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-615000 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-615000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-v8j6z" [5c2db24a-2535-45de-8655-843abcba700f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-v8j6z" [5c2db24a-2535-45de-8655-843abcba700f] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.006405385s
functional_test.go:1614: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (7.12s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:109: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-235000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-235000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-235000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: cri-dockerd version:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: containerd daemon status:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: containerd daemon config:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: containerd config dump:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: crio daemon status:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: crio daemon config:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: /etc/crio:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

>>> host: crio config:
* Profile "cilium-235000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-235000"

----------------------- debugLogs end: cilium-235000 [took: 5.39713147s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-235000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-235000
--- SKIP: TestNetworkPlugins/group/cilium (5.91s)

TestStartStop/group/disable-driver-mounts (0.44s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-733000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-733000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.44s)