Test Report: Docker_Linux_containerd 18859

5bbb68fdb343a4fd0bac66b69dd2693514a1fa6d:2024-07-03:35168

Test failures (1/328)

Order  Failed test                  Duration (s)
253    TestMissingContainerUpgrade  1413.02
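For quick reference, the sequence this test drives can be read off the (dbg) Run lines in the log below. The following is only a reproduction sketch under that reading: the /tmp/minikube-v1.26.0.297149768 path is the temporary location of the old release binary used in this run, and missing-upgrade-167387 is this run's profile name, so both would differ elsewhere.

    #!/usr/bin/env bash
    # Reproduction sketch of TestMissingContainerUpgrade (binary path and profile
    # name are taken from this run's log and are not general).
    set -euo pipefail

    # 1) Bring up a cluster with an older minikube release.
    /tmp/minikube-v1.26.0.297149768 start -p missing-upgrade-167387 --memory=2200 \
      --driver=docker --container-runtime=containerd

    # 2) Remove the kic container behind minikube's back; only the profile config survives.
    docker stop missing-upgrade-167387
    docker rm missing-upgrade-167387

    # 3) Start again with the binary under test; it should detect the missing container
    #    and recreate it. In this run the recreated node never became reachable over SSH
    #    and the command exited with status 80 after roughly 21m37s.
    out/minikube-linux-amd64 start -p missing-upgrade-167387 --memory=2200 \
      --alsologtostderr -v=1 --driver=docker --container-runtime=containerd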
TestMissingContainerUpgrade (1413.02s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.297149768 start -p missing-upgrade-167387 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.297149768 start -p missing-upgrade-167387 --memory=2200 --driver=docker  --container-runtime=containerd: (1m31.788793965s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-167387
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-167387: (11.070540956s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-167387
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-167387 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p missing-upgrade-167387 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: exit status 80 (21m37.460539477s)

-- stdout --
	* [missing-upgrade-167387] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	* Using the docker driver based on existing profile
	* Starting "missing-upgrade-167387" primary control-plane node in "missing-upgrade-167387" cluster
	* Pulling base image v0.0.44-1719972989-19184 ...
	* docker "missing-upgrade-167387" container is missing, will recreate.
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Updating the running docker "missing-upgrade-167387" container ...
	
	

-- /stdout --
** stderr ** 
	I0703 23:34:26.447313  231294 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:34:26.447405  231294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:34:26.447413  231294 out.go:304] Setting ErrFile to fd 2...
	I0703 23:34:26.447417  231294 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:34:26.447579  231294 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:34:26.448080  231294 out.go:298] Setting JSON to false
	I0703 23:34:26.449130  231294 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4608,"bootTime":1720045058,"procs":257,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:34:26.449191  231294 start.go:139] virtualization: kvm guest
	I0703 23:34:26.451185  231294 out.go:177] * [missing-upgrade-167387] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:34:26.452276  231294 out.go:177]   - MINIKUBE_LOCATION=18859
	I0703 23:34:26.452385  231294 notify.go:220] Checking for updates...
	I0703 23:34:26.454797  231294 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:34:26.456123  231294 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	I0703 23:34:26.457240  231294 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	I0703 23:34:26.458254  231294 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:34:26.459508  231294 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:34:26.461091  231294 config.go:182] Loaded profile config "missing-upgrade-167387": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0703 23:34:26.462709  231294 out.go:177] * Kubernetes 1.30.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.2
	I0703 23:34:26.463926  231294 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:34:26.494163  231294 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0703 23:34:26.494299  231294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:34:26.549742  231294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:63 SystemTime:2024-07-03 23:34:26.539544666 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:34:26.549876  231294 docker.go:295] overlay module found
	I0703 23:34:26.552230  231294 out.go:177] * Using the docker driver based on existing profile
	I0703 23:34:26.553445  231294 start.go:297] selected driver: docker
	I0703 23:34:26.553462  231294 start.go:901] validating driver "docker" against &{Name:missing-upgrade-167387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:missing-upgrade-167387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwareP
ath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0703 23:34:26.553530  231294 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:34:26.554341  231294 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:34:26.613104  231294 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:63 SystemTime:2024-07-03 23:34:26.603110357 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:34:26.613453  231294 cni.go:84] Creating CNI manager for ""
	I0703 23:34:26.613472  231294 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0703 23:34:26.613510  231294 start.go:340] cluster config:
	{Name:missing-upgrade-167387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:missing-upgrade-167387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containe
rd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:0s}
	I0703 23:34:26.615308  231294 out.go:177] * Starting "missing-upgrade-167387" primary control-plane node in "missing-upgrade-167387" cluster
	I0703 23:34:26.616590  231294 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0703 23:34:26.617856  231294 out.go:177] * Pulling base image v0.0.44-1719972989-19184 ...
	I0703 23:34:26.618922  231294 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime containerd
	I0703 23:34:26.618968  231294 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-containerd-overlay2-amd64.tar.lz4
	I0703 23:34:26.618981  231294 cache.go:56] Caching tarball of preloaded images
	I0703 23:34:26.619048  231294 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon
	I0703 23:34:26.619066  231294 preload.go:173] Found /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 23:34:26.619076  231294 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on containerd
	I0703 23:34:26.619193  231294 profile.go:143] Saving config to /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/config.json ...
	I0703 23:34:26.638890  231294 image.go:83] Found gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 in local docker daemon, skipping pull
	I0703 23:34:26.638943  231294 cache.go:144] gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 exists in daemon, skipping load
	I0703 23:34:26.638989  231294 cache.go:194] Successfully downloaded all kic artifacts
	I0703 23:34:26.639047  231294 start.go:360] acquireMachinesLock for missing-upgrade-167387: {Name:mk749d88d79f2026262246b645af9fa820f61ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:34:26.639134  231294 start.go:364] duration metric: took 56.378µs to acquireMachinesLock for "missing-upgrade-167387"
	I0703 23:34:26.639160  231294 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:34:26.639169  231294 fix.go:54] fixHost starting: 
	I0703 23:34:26.639528  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:26.660516  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:26.660595  231294 fix.go:112] recreateIfNeeded on missing-upgrade-167387: state= err=unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:26.660621  231294 fix.go:117] machineExists: false. err=machine does not exist
	I0703 23:34:26.662499  231294 out.go:177] * docker "missing-upgrade-167387" container is missing, will recreate.
	I0703 23:34:26.663603  231294 delete.go:124] DEMOLISHING missing-upgrade-167387 ...
	I0703 23:34:26.663677  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:26.681087  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	W0703 23:34:26.681131  231294 stop.go:83] unable to get state: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:26.681146  231294 delete.go:128] stophost failed (probably ok): ssh power off: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:26.681484  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:26.698874  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:26.698947  231294 delete.go:82] Unable to get host status for missing-upgrade-167387, assuming it has already been deleted: state: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:26.699009  231294 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-167387
	W0703 23:34:26.716839  231294 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-167387 returned with exit code 1
	I0703 23:34:26.716884  231294 kic.go:371] could not find the container missing-upgrade-167387 to remove it. will try anyways
	I0703 23:34:26.716934  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:26.735054  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	W0703 23:34:26.735123  231294 oci.go:84] error getting container status, will try to delete anyways: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:26.735190  231294 cli_runner.go:164] Run: docker exec --privileged -t missing-upgrade-167387 /bin/bash -c "sudo init 0"
	W0703 23:34:26.751345  231294 cli_runner.go:211] docker exec --privileged -t missing-upgrade-167387 /bin/bash -c "sudo init 0" returned with exit code 1
	I0703 23:34:26.751386  231294 oci.go:650] error shutdown missing-upgrade-167387: docker exec --privileged -t missing-upgrade-167387 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:27.751517  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:27.772396  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:27.772472  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:27.772486  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:27.772528  231294 retry.go:31] will retry after 259.213901ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:28.031931  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:28.049499  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:28.049587  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:28.049604  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:28.049638  231294 retry.go:31] will retry after 443.587912ms: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:28.494329  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:28.511058  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:28.511114  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:28.511124  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:28.511147  231294 retry.go:31] will retry after 1.109898066s: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:29.621973  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:29.638086  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:29.638136  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:29.638144  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:29.638169  231294 retry.go:31] will retry after 1.106512276s: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:30.744887  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:30.765211  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:30.765269  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:30.765283  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:30.765449  231294 retry.go:31] will retry after 2.560756211s: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:33.327793  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:33.345122  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:33.345192  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:33.345206  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:33.345242  231294 retry.go:31] will retry after 2.41948537s: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:35.764874  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:35.780889  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:35.780973  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:35.780989  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:35.781027  231294 retry.go:31] will retry after 4.467853562s: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:40.249115  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:40.264648  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:40.264713  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:40.264736  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:40.264795  231294 retry.go:31] will retry after 6.401033399s: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:46.666582  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	W0703 23:34:46.682302  231294 cli_runner.go:211] docker container inspect missing-upgrade-167387 --format={{.State.Status}} returned with exit code 1
	I0703 23:34:46.682364  231294 oci.go:662] temporary error verifying shutdown: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	I0703 23:34:46.682376  231294 oci.go:664] temporary error: container missing-upgrade-167387 status is  but expect it to be exited
	I0703 23:34:46.682409  231294 oci.go:88] couldn't shut down missing-upgrade-167387 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-167387": docker container inspect missing-upgrade-167387 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-167387
	 
	I0703 23:34:46.682456  231294 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-167387
	I0703 23:34:46.698288  231294 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-167387
	W0703 23:34:46.712532  231294 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-167387 returned with exit code 1
	I0703 23:34:46.712634  231294 cli_runner.go:164] Run: docker network inspect missing-upgrade-167387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 23:34:46.727570  231294 cli_runner.go:164] Run: docker network rm missing-upgrade-167387
	I0703 23:34:47.101318  231294 fix.go:124] Sleeping 1 second for extra luck!
	I0703 23:34:48.101910  231294 start.go:125] createHost starting for "" (driver="docker")
	I0703 23:34:48.104054  231294 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0703 23:34:48.104199  231294 start.go:159] libmachine.API.Create for "missing-upgrade-167387" (driver="docker")
	I0703 23:34:48.104233  231294 client.go:168] LocalClient.Create starting
	I0703 23:34:48.104316  231294 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem
	I0703 23:34:48.104363  231294 main.go:141] libmachine: Decoding PEM data...
	I0703 23:34:48.104394  231294 main.go:141] libmachine: Parsing certificate...
	I0703 23:34:48.104474  231294 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem
	I0703 23:34:48.104510  231294 main.go:141] libmachine: Decoding PEM data...
	I0703 23:34:48.104528  231294 main.go:141] libmachine: Parsing certificate...
	I0703 23:34:48.104807  231294 cli_runner.go:164] Run: docker network inspect missing-upgrade-167387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0703 23:34:48.122901  231294 cli_runner.go:211] docker network inspect missing-upgrade-167387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0703 23:34:48.122967  231294 network_create.go:284] running [docker network inspect missing-upgrade-167387] to gather additional debugging logs...
	I0703 23:34:48.122988  231294 cli_runner.go:164] Run: docker network inspect missing-upgrade-167387
	W0703 23:34:48.138023  231294 cli_runner.go:211] docker network inspect missing-upgrade-167387 returned with exit code 1
	I0703 23:34:48.138047  231294 network_create.go:287] error running [docker network inspect missing-upgrade-167387]: docker network inspect missing-upgrade-167387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-167387 not found
	I0703 23:34:48.138057  231294 network_create.go:289] output of [docker network inspect missing-upgrade-167387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-167387 not found
	
	** /stderr **
	I0703 23:34:48.138166  231294 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0703 23:34:48.170675  231294 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f9dc26532a27 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:27:e6:61:01} reservation:<nil>}
	E0703 23:34:48.201630  231294 network_create.go:103] failed to find free subnet for docker network missing-upgrade-167387 after 20 attempts: failed listing network interface addresses: route ip+net: no such network interface
	W0703 23:34:48.201751  231294 out.go:239] ! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: failed listing network interface addresses: route ip+net: no such network interface
	! Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: failed listing network interface addresses: route ip+net: no such network interface
	I0703 23:34:48.201819  231294 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0703 23:34:48.231912  231294 cli_runner.go:164] Run: docker volume create missing-upgrade-167387 --label name.minikube.sigs.k8s.io=missing-upgrade-167387 --label created_by.minikube.sigs.k8s.io=true
	I0703 23:34:48.255790  231294 oci.go:103] Successfully created a docker volume missing-upgrade-167387
	I0703 23:34:48.255880  231294 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-167387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-167387 --entrypoint /usr/bin/test -v missing-upgrade-167387:/var gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -d /var/lib
	I0703 23:34:49.291155  231294 cli_runner.go:217] Completed: docker run --rm --name missing-upgrade-167387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-167387 --entrypoint /usr/bin/test -v missing-upgrade-167387:/var gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -d /var/lib: (1.035230138s)
	I0703 23:34:49.291186  231294 oci.go:107] Successfully prepared a docker volume missing-upgrade-167387
	I0703 23:34:49.291216  231294 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime containerd
	I0703 23:34:49.291246  231294 kic.go:194] Starting extracting preloaded images to volume ...
	I0703 23:34:49.291331  231294 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-167387:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -I lz4 -xf /preloaded.tar -C /extractDir
	I0703 23:34:55.433348  231294 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-167387:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 -I lz4 -xf /preloaded.tar -C /extractDir: (6.141974602s)
	I0703 23:34:55.433383  231294 kic.go:203] duration metric: took 6.142134091s to extract preloaded images to volume ...
	W0703 23:34:55.433534  231294 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0703 23:34:55.433676  231294 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0703 23:34:55.520928  231294 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-167387 --name missing-upgrade-167387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-167387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-167387 --volume missing-upgrade-167387:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95
	I0703 23:34:55.938615  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Running}}
	I0703 23:34:55.961110  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	I0703 23:34:56.004061  231294 cli_runner.go:164] Run: docker exec missing-upgrade-167387 stat /var/lib/dpkg/alternatives/iptables
	I0703 23:34:56.106603  231294 oci.go:144] the created container "missing-upgrade-167387" has a running status.
	I0703 23:34:56.106637  231294 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa...
	I0703 23:34:56.608181  231294 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0703 23:34:56.641198  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	I0703 23:34:56.663295  231294 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0703 23:34:56.663319  231294 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-167387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0703 23:34:56.749262  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	I0703 23:34:56.769123  231294 machine.go:94] provisionDockerMachine start ...
	I0703 23:34:56.769214  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:34:56.791475  231294 main.go:141] libmachine: Using SSH client type: native
	I0703 23:34:56.791757  231294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0703 23:34:56.791779  231294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:37:27.817142  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0703 23:37:30.889121  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47964->127.0.0.1:33033: read: connection reset by peer
	I0703 23:37:36.969066  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47972->127.0.0.1:33033: read: connection reset by peer
	I0703 23:37:40.045076  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35232->127.0.0.1:33033: read: connection reset by peer
	I0703 23:37:46.121097  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35238->127.0.0.1:33033: read: connection reset by peer
	I0703 23:37:49.197129  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35248->127.0.0.1:33033: read: connection reset by peer
	I0703 23:37:55.273127  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48948->127.0.0.1:33033: read: connection reset by peer
	I0703 23:37:58.349065  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48952->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:04.425093  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45474->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:07.497109  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45488->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:13.577073  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55920->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:16.653057  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55932->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:22.729103  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55934->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:25.801048  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47446->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:31.881030  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47448->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:34.953120  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59880->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:41.033115  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59894->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:44.105112  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60376->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:50.189068  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60378->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:53.257015  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52796->127.0.0.1:33033: read: connection reset by peer
	I0703 23:38:59.341176  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52812->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:02.413064  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41850->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:08.489085  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41860->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:11.561099  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40086->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:17.645110  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40094->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:20.713137  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54638->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:26.793061  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54644->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:29.865092  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42174->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:35.949046  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42176->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:39.017135  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42182->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:45.097098  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45124->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:48.169091  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45134->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:54.249160  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:39:57.325097  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58690->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:03.401143  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48344->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:06.473135  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48350->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:12.553075  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48358->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:15.625151  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42870->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:21.709182  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42876->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:24.777082  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53276->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:30.857045  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53282->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:33.929047  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41974->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:40.009088  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41984->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:43.081119  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60444->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:48.104993  231294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:40:48.105053  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:40:48.122567  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:40:49.161159  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60446->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:49.161225  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60452->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:49.161253  231294 retry.go:31] will retry after 147.306185ms: ssh: handshake failed: read tcp 127.0.0.1:60452->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:52.237173  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:52.237177  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53046->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:52.237278  231294 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60468->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:52.237300  231294 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:52.237369  231294 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 23:40:52.237412  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:40:52.256649  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:40:55.305177  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53050->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:55.305256  231294 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53050->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:55.305270  231294 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53050->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:55.305275  231294 start.go:128] duration metric: took 6m7.203339471s to createHost
	I0703 23:40:55.305189  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53056->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:55.305365  231294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:40:55.305406  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:40:55.324544  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:40:58.377223  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53088->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:58.377299  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53072->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:58.377375  231294 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53072->127.0.0.1:33033: read: connection reset by peer
	W0703 23:40:58.377395  231294 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53072->127.0.0.1:33033: read: connection reset by peer
	I0703 23:40:58.377446  231294 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 23:40:58.377492  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:40:58.402383  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:41:01.449193  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53104->127.0.0.1:33033: read: connection reset by peer
	W0703 23:41:01.449273  231294 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53104->127.0.0.1:33033: read: connection reset by peer
	W0703 23:41:01.449291  231294 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53104->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:01.449297  231294 fix.go:56] duration metric: took 6m34.81012908s for fixHost
	I0703 23:41:01.449305  231294 start.go:83] releasing machines lock for "missing-upgrade-167387", held for 6m34.810159204s
	I0703 23:41:01.449200  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53152->127.0.0.1:33033: read: connection reset by peer
	W0703 23:41:01.449319  231294 start.go:714] error starting host: recreate: creating host: create host timed out in 360.000000 seconds
	W0703 23:41:01.449407  231294 out.go:239] ! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	! StartHost failed, but will try again: recreate: creating host: create host timed out in 360.000000 seconds
	I0703 23:41:01.449422  231294 start.go:729] Will try again in 5 seconds ...
	I0703 23:41:04.521126  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53162->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:06.450001  231294 start.go:360] acquireMachinesLock for missing-upgrade-167387: {Name:mk749d88d79f2026262246b645af9fa820f61ff5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 23:41:06.450100  231294 start.go:364] duration metric: took 69.287µs to acquireMachinesLock for "missing-upgrade-167387"
	I0703 23:41:06.450127  231294 start.go:96] Skipping create...Using existing machine configuration
	I0703 23:41:06.450135  231294 fix.go:54] fixHost starting: 
	I0703 23:41:06.450379  231294 cli_runner.go:164] Run: docker container inspect missing-upgrade-167387 --format={{.State.Status}}
	I0703 23:41:06.467377  231294 fix.go:112] recreateIfNeeded on missing-upgrade-167387: state=Running err=<nil>
	W0703 23:41:06.467401  231294 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 23:41:06.468816  231294 out.go:177] * Updating the running docker "missing-upgrade-167387" container ...
	I0703 23:41:06.469932  231294 machine.go:94] provisionDockerMachine start ...
	I0703 23:41:06.470034  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:41:06.486951  231294 main.go:141] libmachine: Using SSH client type: native
	I0703 23:41:06.487134  231294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0703 23:41:06.487145  231294 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 23:41:09.549152  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53178->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:09.549155  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53194->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:12.617168  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45504->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:12.617233  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45488->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:18.697143  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45520->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:18.697158  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45518->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:21.773108  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40092->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:21.773176  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40094->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:27.849155  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40108->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:27.849220  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40104->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:30.921118  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58124->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:30.921117  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58114->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:37.005172  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58130->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:37.005234  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58134->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:40.077056  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45048->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:40.077056  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45046->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:46.157104  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45072->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:46.157101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45064->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:49.229133  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:49.229194  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45080->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:52.230009  231294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:41:52.230037  231294 ubuntu.go:169] provisioning hostname "missing-upgrade-167387"
	I0703 23:41:52.230084  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:41:52.250448  231294 main.go:141] libmachine: Using SSH client type: native
	I0703 23:41:52.250627  231294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0703 23:41:52.250640  231294 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-167387 && echo "missing-upgrade-167387" | sudo tee /etc/hostname
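	This command sets the node's hostname and persists it to /etc/hostname inside the node container. With SSH being reset on every dial, the same state can be checked out-of-band through the container runtime instead (a quick sketch, assuming the container is still up):
	
	    # bypass the failing SSH path and ask the container directly
	    docker exec missing-upgrade-167387 hostname
	    docker exec missing-upgrade-167387 cat /etc/hostname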
	I0703 23:41:55.305095  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47178->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:55.305100  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47190->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:58.377118  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47192->127.0.0.1:33033: read: connection reset by peer
	I0703 23:41:58.377117  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47198->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:04.457129  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50906->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:04.457200  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50908->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:07.529149  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50918->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:07.529209  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50920->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:13.609127  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57444->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:13.609187  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57458->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:16.681114  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57472->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:16.681131  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57470->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:22.761069  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49800->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:22.761080  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49784->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:25.833099  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49804->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:25.833159  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49816->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:31.913051  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49830->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:31.913112  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49846->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:34.985099  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50540->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:34.985099  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50542->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:41.065106  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50554->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:41.065158  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50544->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:44.137154  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37240->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:44.137154  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37250->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:50.217128  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37264->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:50.217179  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37256->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:53.289102  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40190->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:53.289116  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40202->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:59.373441  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40210->127.0.0.1:33033: read: connection reset by peer
	I0703 23:42:59.373508  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40212->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:02.441152  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39394->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:02.441153  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39390->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:08.521184  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39404->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:08.521238  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39408->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:11.593168  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38898->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:11.593168  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38892->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:17.677108  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38904->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:17.677171  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38906->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:20.749094  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42172->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:20.749098  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42164->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:26.825087  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42180->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:26.825087  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42196->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:29.897131  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40470->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:29.897131  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40472->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:35.977115  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40500->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:35.977170  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40488->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:39.053286  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40516->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:39.053301  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40514->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:45.133179  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41264->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:45.133193  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41254->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:48.201123  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41274->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:48.201126  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41288->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:54.281220  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35716->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:54.281282  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35724->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:57.353101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35734->127.0.0.1:33033: read: connection reset by peer
	I0703 23:43:57.353163  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35742->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:03.433083  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35090->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:03.433105  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35096->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:06.509249  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35098->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:06.509253  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35104->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:12.585089  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35108->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:12.585089  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35124->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:15.661207  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39684->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:15.661207  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39686->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:21.737152  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39716->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:21.737162  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39712->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:24.809217  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44220->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:24.809273  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44224->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:30.889113  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44226->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:30.889117  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44236->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:33.961091  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52476->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:33.961092  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52480->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:40.041080  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:40.041080  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52508->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:43.117152  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51912->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:43.117152  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51902->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:49.193110  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51928->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:49.193110  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51916->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:52.265193  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59946->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:52.265251  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59948->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:58.345089  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59972->127.0.0.1:33033: read: connection reset by peer
	I0703 23:44:58.345089  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59986->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:01.417098  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51546->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:01.417132  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51538->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:07.501107  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51562->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:07.501169  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51558->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:10.569093  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44060->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:10.569140  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44054->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:16.653120  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44064->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:16.653120  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44066->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:19.721191  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44094->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:19.721254  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:25.801129  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34210->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:25.801206  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34202->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:28.873101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34232->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:28.873099  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34216->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:34.953103  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52074->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:34.953115  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:38.029130  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52098->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:38.029189  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52092->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:41.029661  231294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:45:41.029688  231294 ubuntu.go:169] provisioning hostname "missing-upgrade-167387"
	I0703 23:45:41.029749  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:45:41.051310  231294 main.go:141] libmachine: Using SSH client type: native
	I0703 23:45:41.051554  231294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0703 23:45:41.051578  231294 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-167387 && echo "missing-upgrade-167387" | sudo tee /etc/hostname
	I0703 23:45:44.105137  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50952->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:44.105140  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50968->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:47.177131  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50984->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:47.177176  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50978->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:53.261137  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48618->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:53.261164  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48620->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:56.329224  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48644->127.0.0.1:33033: read: connection reset by peer
	I0703 23:45:56.329226  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48630->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:02.413101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48664->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:02.413101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48660->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:05.485135  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42406->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:05.485134  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42396->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:11.565110  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42414->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:11.565173  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42418->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:14.633438  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55870->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:14.633501  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55882->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:20.713172  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55884->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:20.713230  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55886->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:23.785186  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44968->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:23.785234  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44952->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:26.785392  231294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:46:26.785509  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:46:26.803501  231294 main.go:141] libmachine: Using SSH client type: native
	I0703 23:46:26.803676  231294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0703 23:46:26.803693  231294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-167387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-167387/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-167387' | sudo tee -a /etc/hosts; 
				fi
			fi
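	The script above is minikube's idempotent /etc/hosts fix-up: if no line already ends in the node name, it either rewrites an existing 127.0.1.1 entry in place or appends "127.0.1.1 missing-upgrade-167387". Whether it ever took effect in this run can be checked without SSH:
	
	    # did the 127.0.1.1 mapping land inside the node container?
	    docker exec missing-upgrade-167387 grep missing-upgrade-167387 /etc/hosts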
	I0703 23:46:29.865154  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44984->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:29.865154  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44972->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:32.941145  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47008->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:32.941151  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47006->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:39.021122  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47034->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:39.021153  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47038->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:42.089107  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58406->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:42.089164  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58408->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:48.173122  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58422->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:48.173123  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58424->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:51.241169  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54888->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:51.241218  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54886->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:57.325121  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54904->127.0.0.1:33033: read: connection reset by peer
	I0703 23:46:57.325151  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54918->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:00.397080  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59478->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:00.397088  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59464->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:06.473114  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59500->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:06.473167  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59494->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:09.545147  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59518->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:09.545204  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59512->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:15.625115  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34120->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:15.625180  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34128->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:18.701131  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34142->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:18.701187  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34134->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:24.781171  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55670->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:24.781171  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55684->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:27.849095  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55700->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:27.849097  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55702->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:33.929064  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47924->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:33.929066  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47926->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:37.001127  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47942->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:37.001128  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47954->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:43.081119  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41580->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:43.081176  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41564->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:46.153084  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41594->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:46.153135  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41582->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:52.233106  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41620->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:52.233106  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41608->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:55.305194  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38814->127.0.0.1:33033: read: connection reset by peer
	I0703 23:47:55.305193  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38798->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:01.389129  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38816->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:01.389139  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38822->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:04.457180  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43994->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:04.457194  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44000->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:10.537165  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44026->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:10.537223  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44012->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:13.609146  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56510->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:13.609181  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56512->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:19.689118  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56550->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:19.689175  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56534->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:22.761200  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58858->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:22.761200  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58860->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:28.845129  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58872->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:28.845129  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58866->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:31.917125  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53216->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:31.917125  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53206->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:37.993136  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53232->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:37.993187  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53230->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:41.065118  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60332->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:41.065148  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60336->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:47.149091  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60344->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:47.149140  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60358->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:50.217101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40620->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:50.217101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40622->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:56.297267  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40640->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:56.297332  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40638->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:59.369128  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40650->127.0.0.1:33033: read: connection reset by peer
	I0703 23:48:59.369141  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40652->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:05.449211  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51812->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:05.449283  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51828->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:08.521101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51842->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:08.521115  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51844->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:14.601101  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52870->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:14.601159  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52872->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:17.673123  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52886->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:17.673123  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52900->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:23.753092  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45228->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:23.753141  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45230->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:26.825094  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45234->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:26.825146  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45248->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:32.905110  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59614->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:32.905115  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59600->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:35.981126  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59616->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:35.981186  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59620->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:42.057178  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59622->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:42.057178  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59624->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:45.129242  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42616->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:45.129242  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42614->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:51.209138  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42628->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:51.209170  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42642->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:54.281089  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55494->127.0.0.1:33033: read: connection reset by peer
	I0703 23:49:54.281090  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:00.361148  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55504->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:00.361165  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55506->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:03.433184  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38890->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:03.433193  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38888->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:09.517111  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38896->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:09.517114  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38900->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:12.585117  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42112->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:12.585117  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42114->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:15.585929  231294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:50:15.586034  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:50:15.604858  231294 main.go:141] libmachine: Using SSH client type: native
	I0703 23:50:15.605041  231294 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 127.0.0.1 33033 <nil> <nil>}
	I0703 23:50:15.605058  231294 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-167387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-167387/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-167387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 23:50:18.665170  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42144->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:18.665235  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42130->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:21.737140  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59416->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:21.737192  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59418->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:27.817109  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59422->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:27.817109  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59436->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:30.889163  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40606->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:30.889163  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40608->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:36.969191  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40628->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:36.969246  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40626->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:40.041180  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44856->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:40.041181  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44872->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:46.121160  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44894->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:46.121165  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44888->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:49.193217  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44900->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:49.193217  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44914->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:55.273150  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52296->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:55.273210  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52294->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:58.345203  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52314->127.0.0.1:33033: read: connection reset by peer
	I0703 23:50:58.345288  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52308->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:01.346099  231294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:51:01.346128  231294 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18859-12140/.minikube CaCertPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18859-12140/.minikube}
	I0703 23:51:01.346157  231294 ubuntu.go:177] setting up certificates
	I0703 23:51:01.346170  231294 provision.go:84] configureAuth start
	I0703 23:51:01.346225  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:01.364051  231294 provision.go:143] copyHostCerts
	I0703 23:51:01.364123  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:01.364135  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:01.364213  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:01.364354  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:01.364364  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:01.364393  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:01.364476  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:01.364485  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:01.364511  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:01.364573  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
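	configureAuth first refreshes the local copies of ca.pem, cert.pem and key.pem and then generates a server certificate signed by the profile's CA, with the SANs listed above (127.0.0.1, the container IP 172.17.0.2, localhost, minikube and the node name). Assuming the generation step completed and server.pem exists at the path printed in this log, the result can be inspected with openssl:
	
	    # confirm the regenerated server cert carries the expected SANs
	    openssl x509 -in /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem \
	        -noout -text | grep -A1 'Subject Alternative Name'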
	I0703 23:51:01.538597  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:01.538661  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:01.538693  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:01.556353  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:04.429105  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:42492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:04.429174  231294 provision.go:87] duration metric: took 3.082998606s to configureAuth
	W0703 23:51:04.429182  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:04.429200  231294 retry.go:31] will retry after 140.948µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:04.429105  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42490->127.0.0.1:33033: read: connection reset by peer
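	From this point the pattern repeats: each configureAuth attempt takes roughly three seconds, fails when copyRemoteCerts' SSH handshake is reset by 127.0.0.1:33033, and is retried after a microsecond-scale backoff, so provisioning never progresses. Since the resets always come from the forwarded port rather than from the client, the quickest triage is on the container side (a sketch; nc on the host and pgrep inside the image are assumed to be available):
	
	    # does anything accept connections on the forwarded port?
	    nc -vz 127.0.0.1 33033
	    # is sshd actually running inside the node container?
	    docker exec missing-upgrade-167387 pgrep -a sshd
	    # recent container output often shows why sshd keeps resetting connections
	    docker logs --tail 50 missing-upgrade-167387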
	I0703 23:51:04.430295  231294 provision.go:84] configureAuth start
	I0703 23:51:04.430379  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:04.448685  231294 provision.go:143] copyHostCerts
	I0703 23:51:04.448780  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:04.448794  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:04.448854  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:04.448958  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:04.448969  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:04.448998  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:04.449074  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:04.449083  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:04.449111  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:04.449177  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:04.588104  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:04.588173  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:04.588220  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:04.605719  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:07.497138  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:42498->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:07.497176  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42514->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:07.497198  231294 provision.go:87] duration metric: took 3.066889425s to configureAuth
	W0703 23:51:07.497207  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42498->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:07.497218  231294 retry.go:31] will retry after 141.13µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42498->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:07.498355  231294 provision.go:84] configureAuth start
	I0703 23:51:07.498430  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:07.516253  231294 provision.go:143] copyHostCerts
	I0703 23:51:07.516319  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:07.516331  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:07.516395  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:07.516486  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:07.516495  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:07.516529  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:07.516587  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:07.516595  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:07.516618  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:07.516696  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:07.755782  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:07.755838  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:07.755870  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:07.773983  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:10.569168  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:42520->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:10.569181  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60730->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:10.569239  231294 provision.go:87] duration metric: took 3.07086774s to configureAuth
	W0703 23:51:10.569251  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42520->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:10.569264  231294 retry.go:31] will retry after 130.847µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42520->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:10.570388  231294 provision.go:84] configureAuth start
	I0703 23:51:10.570481  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:10.587984  231294 provision.go:143] copyHostCerts
	I0703 23:51:10.588045  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:10.588055  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:10.588118  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:10.588223  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:10.588231  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:10.588254  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:10.588318  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:10.588324  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:10.588345  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:10.588410  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:10.742523  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:10.742584  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:10.742620  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:10.759943  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:51:13.641147  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60748->127.0.0.1:33033: read: connection reset by peer
	W0703 23:51:13.641210  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60738->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:13.641258  231294 provision.go:87] duration metric: took 3.070855327s to configureAuth
	W0703 23:51:13.641273  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60738->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:13.641289  231294 retry.go:31] will retry after 181.732µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60738->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:13.642409  231294 provision.go:84] configureAuth start
	I0703 23:51:13.642484  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:13.660636  231294 provision.go:143] copyHostCerts
	I0703 23:51:13.660702  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:13.660716  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:13.660815  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:13.661125  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:13.661141  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:13.661184  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:13.661315  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:13.661326  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:13.661361  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:13.661450  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:13.886134  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:13.886189  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:13.886222  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:13.903584  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:16.713148  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:16.713190  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60770->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:16.713206  231294 provision.go:87] duration metric: took 3.07078273s to configureAuth
	W0703 23:51:16.713216  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:16.713257  231294 retry.go:31] will retry after 314.392µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60758->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:16.714373  231294 provision.go:84] configureAuth start
	I0703 23:51:16.714440  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:16.732282  231294 provision.go:143] copyHostCerts
	I0703 23:51:16.732339  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:16.732349  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:16.732402  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:16.732492  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:16.732500  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:16.732519  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:16.732584  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:16.732591  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:16.732615  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:16.732685  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:16.949654  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:16.949724  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:16.949772  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:16.967520  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:19.785096  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60786->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:19.785125  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45412->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:19.785165  231294 provision.go:87] duration metric: took 3.070780464s to configureAuth
	W0703 23:51:19.785177  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60786->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:19.785194  231294 retry.go:31] will retry after 801.347µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:60786->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:19.786323  231294 provision.go:84] configureAuth start
	I0703 23:51:19.786400  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:19.804567  231294 provision.go:143] copyHostCerts
	I0703 23:51:19.804617  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:19.804635  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:19.804704  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:19.804835  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:19.804845  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:19.804871  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:19.804941  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:19.804950  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:19.804979  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:19.805043  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:19.988386  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:19.988440  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:19.988481  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:20.006753  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:51:22.857244  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45434->127.0.0.1:33033: read: connection reset by peer
	W0703 23:51:22.857300  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45420->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:22.857344  231294 provision.go:87] duration metric: took 3.07100542s to configureAuth
	W0703 23:51:22.857352  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45420->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:22.857363  231294 retry.go:31] will retry after 924.099µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45420->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:22.858481  231294 provision.go:84] configureAuth start
	I0703 23:51:22.858555  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:22.876319  231294 provision.go:143] copyHostCerts
	I0703 23:51:22.876390  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:22.876405  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:22.876468  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:22.876574  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:22.876588  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:22.876615  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:22.876694  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:22.876703  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:22.876729  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:22.876834  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:22.995935  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:22.995991  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:22.996033  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:23.013399  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:25.929169  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45444->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:25.929195  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45458->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:25.929240  231294 provision.go:87] duration metric: took 3.070742607s to configureAuth
	W0703 23:51:25.929249  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45444->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:25.929259  231294 retry.go:31] will retry after 2.076779ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45444->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:25.931377  231294 provision.go:84] configureAuth start
	I0703 23:51:25.931452  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:25.951033  231294 provision.go:143] copyHostCerts
	I0703 23:51:25.951083  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:25.951090  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:25.951148  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:25.951227  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:25.951235  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:25.951257  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:25.951306  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:25.951313  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:25.951335  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:25.951383  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:25.999651  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:25.999710  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:25.999742  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:26.017210  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:29.001145  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:29.001170  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45470->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:29.001207  231294 provision.go:87] duration metric: took 3.069813492s to configureAuth
	W0703 23:51:29.001216  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:29.001227  231294 retry.go:31] will retry after 1.972793ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:29.003392  231294 provision.go:84] configureAuth start
	I0703 23:51:29.003461  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:29.022337  231294 provision.go:143] copyHostCerts
	I0703 23:51:29.022394  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:29.022401  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:29.022459  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:29.022541  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:29.022549  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:29.022570  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:29.022625  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:29.022633  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:29.022652  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:29.022699  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:29.123716  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:29.123777  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:29.123813  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:29.143659  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:32.073127  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45486->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:32.073148  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54650->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:32.073190  231294 provision.go:87] duration metric: took 3.069782044s to configureAuth
	W0703 23:51:32.073198  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45486->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:32.073213  231294 retry.go:31] will retry after 5.668154ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45486->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:32.079400  231294 provision.go:84] configureAuth start
	I0703 23:51:32.079471  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:32.097277  231294 provision.go:143] copyHostCerts
	I0703 23:51:32.097345  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:32.097353  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:32.097408  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:32.097496  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:32.097505  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:32.097523  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:32.097573  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:32.097581  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:32.097597  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:32.097644  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:32.273651  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:32.273710  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:32.273743  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:32.291199  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:35.145107  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54666->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:35.145117  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54672->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:35.145181  231294 provision.go:87] duration metric: took 3.065765189s to configureAuth
	W0703 23:51:35.145193  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54666->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:35.145209  231294 retry.go:31] will retry after 3.407577ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54666->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:35.149398  231294 provision.go:84] configureAuth start
	I0703 23:51:35.149464  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:35.167630  231294 provision.go:143] copyHostCerts
	I0703 23:51:35.167683  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:35.167695  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:35.167760  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:35.167861  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:35.167871  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:35.167905  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:35.167981  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:35.167992  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:35.168024  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:35.168085  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:35.269344  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:35.269407  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:35.269451  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:35.287187  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:51:38.217125  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54698->127.0.0.1:33033: read: connection reset by peer
	W0703 23:51:38.217175  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:38.217218  231294 provision.go:87] duration metric: took 3.067806666s to configureAuth
	W0703 23:51:38.217227  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:38.217236  231294 retry.go:31] will retry after 9.382155ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:38.227432  231294 provision.go:84] configureAuth start
	I0703 23:51:38.227519  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:38.244837  231294 provision.go:143] copyHostCerts
	I0703 23:51:38.244903  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:38.244917  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:38.244978  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:38.245077  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:38.245087  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:38.245117  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:38.245194  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:38.245203  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:38.245228  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:38.245298  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:38.380327  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:38.380407  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:38.380444  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:38.397688  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:51:41.289094  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45406->127.0.0.1:33033: read: connection reset by peer
	W0703 23:51:41.289139  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54712->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:41.289181  231294 provision.go:87] duration metric: took 3.061717168s to configureAuth
	W0703 23:51:41.289188  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54712->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:41.289201  231294 retry.go:31] will retry after 12.510595ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54712->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:41.302418  231294 provision.go:84] configureAuth start
	I0703 23:51:41.302507  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:41.319732  231294 provision.go:143] copyHostCerts
	I0703 23:51:41.319800  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:41.319812  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:41.319884  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:41.319985  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:41.319998  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:41.320028  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:41.320100  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:41.320109  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:41.320140  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:41.320203  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:41.604239  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:41.604294  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:41.604327  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:41.621945  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:44.361195  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45408->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:44.361226  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45416->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:44.361268  231294 provision.go:87] duration metric: took 3.058827407s to configureAuth
	W0703 23:51:44.361275  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45408->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:44.361285  231294 retry.go:31] will retry after 13.217572ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45408->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:44.375476  231294 provision.go:84] configureAuth start
	I0703 23:51:44.375546  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:44.393989  231294 provision.go:143] copyHostCerts
	I0703 23:51:44.394062  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:44.394074  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:44.394152  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:44.394251  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:44.394261  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:44.394295  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:44.394380  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:44.394387  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:44.394418  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:44.394469  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:44.519551  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:44.519623  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:44.519674  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:44.536963  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:51:47.433158  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45448->127.0.0.1:33033: read: connection reset by peer
	W0703 23:51:47.433162  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45432->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:47.433236  231294 provision.go:87] duration metric: took 3.057739562s to configureAuth
	W0703 23:51:47.433242  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45432->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:47.433254  231294 retry.go:31] will retry after 40.205805ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45432->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:47.474518  231294 provision.go:84] configureAuth start
	I0703 23:51:47.474598  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:47.491522  231294 provision.go:143] copyHostCerts
	I0703 23:51:47.491571  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:47.491580  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:47.491634  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:47.491711  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:47.491719  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:47.491737  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:47.491785  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:47.491792  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:47.491808  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:47.491853  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:47.602087  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:47.602143  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:47.602175  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:47.620347  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:50.505158  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45460->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:50.505187  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43460->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:50.505229  231294 provision.go:87] duration metric: took 3.030687673s to configureAuth
	W0703 23:51:50.505239  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45460->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:50.505249  231294 retry.go:31] will retry after 39.020113ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45460->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:50.544471  231294 provision.go:84] configureAuth start
	I0703 23:51:50.544568  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:50.562271  231294 provision.go:143] copyHostCerts
	I0703 23:51:50.562334  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:50.562348  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:50.562411  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:50.562531  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:50.562543  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:50.562570  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:50.562646  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:50.562657  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:50.562680  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:50.562747  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:50.796449  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:50.796499  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:50.796533  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:50.813696  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:53.577091  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:53.577159  231294 provision.go:87] duration metric: took 3.032661118s to configureAuth
	W0703 23:51:53.577171  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:53.577190  231294 retry.go:31] will retry after 43.247685ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43468->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:53.577248  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43482->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:53.621422  231294 provision.go:84] configureAuth start
	I0703 23:51:53.621541  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:53.638903  231294 provision.go:143] copyHostCerts
	I0703 23:51:53.638967  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:53.638978  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:53.639044  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:53.639121  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:53.639129  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:53.639152  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:53.639201  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:53.639207  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:53.639227  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:53.639272  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:53.759367  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:53.759422  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:53.759453  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:53.778147  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:51:56.649179  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:56.649211  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43502->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:56.649233  231294 provision.go:87] duration metric: took 3.027788213s to configureAuth
	W0703 23:51:56.649240  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:56.649252  231294 retry.go:31] will retry after 121.629204ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43492->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:56.771535  231294 provision.go:84] configureAuth start
	I0703 23:51:56.771636  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:56.790456  231294 provision.go:143] copyHostCerts
	I0703 23:51:56.790513  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:56.790520  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:56.790573  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:56.790659  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:56.790667  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:56.790685  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:56.790736  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:56.790743  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:56.790759  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:56.790806  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:51:57.099834  231294 provision.go:177] copyRemoteCerts
	I0703 23:51:57.099888  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:51:57.099920  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:51:57.117649  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:51:59.721148  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43514->127.0.0.1:33033: read: connection reset by peer
	W0703 23:51:59.721209  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43504->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:59.721272  231294 provision.go:87] duration metric: took 2.949701285s to configureAuth
	W0703 23:51:59.721284  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43504->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:59.721302  231294 retry.go:31] will retry after 152.36911ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:43504->127.0.0.1:33033: read: connection reset by peer
	I0703 23:51:59.874616  231294 provision.go:84] configureAuth start
	I0703 23:51:59.874742  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:51:59.892888  231294 provision.go:143] copyHostCerts
	I0703 23:51:59.892941  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:51:59.892948  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:51:59.893004  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:51:59.893083  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:51:59.893091  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:51:59.893112  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:51:59.893163  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:51:59.893170  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:51:59.893189  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:51:59.893239  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:00.221803  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:00.221855  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:00.221886  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:00.239618  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:52:02.793146  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56194->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:02.793197  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56186->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:02.793252  231294 provision.go:87] duration metric: took 2.918604393s to configureAuth
	W0703 23:52:02.793263  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56186->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:02.793277  231294 retry.go:31] will retry after 116.038151ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56186->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:02.909484  231294 provision.go:84] configureAuth start
	I0703 23:52:02.909599  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:02.927287  231294 provision.go:143] copyHostCerts
	I0703 23:52:02.927351  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:02.927364  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:02.927429  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:02.927520  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:02.927531  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:02.927559  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:02.927636  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:02.927645  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:02.927672  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:02.927736  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:03.018341  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:03.018415  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:03.018460  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:03.035926  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:52:05.865173  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56202->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:05.865241  231294 provision.go:87] duration metric: took 2.955731546s to configureAuth
	W0703 23:52:05.865252  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56202->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:05.865262  231294 retry.go:31] will retry after 206.05751ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56202->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:05.865321  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56218->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:06.071572  231294 provision.go:84] configureAuth start
	I0703 23:52:06.071671  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:06.088340  231294 provision.go:143] copyHostCerts
	I0703 23:52:06.088413  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:06.088426  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:06.088496  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:06.088582  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:06.088591  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:06.088617  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:06.088668  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:06.088675  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:06.088694  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:06.088741  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:06.231043  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:06.231103  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:06.231137  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:06.248342  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:52:08.937119  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:56226->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:08.937141  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56228->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:08.937173  231294 provision.go:87] duration metric: took 2.865565278s to configureAuth
	W0703 23:52:08.937179  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56226->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:08.937192  231294 retry.go:31] will retry after 671.401617ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56226->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:09.608950  231294 provision.go:84] configureAuth start
	I0703 23:52:09.609032  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:09.626491  231294 provision.go:143] copyHostCerts
	I0703 23:52:09.626553  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:09.626561  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:09.626622  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:09.626703  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:09.626710  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:09.626733  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:09.626782  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:09.626789  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:09.626809  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:09.626854  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:09.840939  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:09.841009  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:09.841054  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:09.858296  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:52:12.009153  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41626->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:12.009211  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:41624->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:12.009267  231294 provision.go:87] duration metric: took 2.40029509s to configureAuth
	W0703 23:52:12.009278  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41624->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:12.009292  231294 retry.go:31] will retry after 828.274049ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41624->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:12.838213  231294 provision.go:84] configureAuth start
	I0703 23:52:12.838309  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:12.855377  231294 provision.go:143] copyHostCerts
	I0703 23:52:12.855441  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:12.855455  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:12.855512  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:12.855604  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:12.855616  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:12.855642  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:12.855707  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:12.855716  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:12.855741  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:12.855806  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:13.354476  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:13.354544  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:13.354583  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:13.371977  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:52:15.081193  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41646->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:15.081256  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:41630->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:15.081274  231294 retry.go:31] will retry after 184.718067ms: ssh: handshake failed: read tcp 127.0.0.1:41630->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:18.153163  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41656->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:18.153221  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:41650->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:18.153274  231294 provision.go:87] duration metric: took 5.315033522s to configureAuth
	W0703 23:52:18.153286  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41650->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:18.153300  231294 retry.go:31] will retry after 1.645824439s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:41650->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:19.800030  231294 provision.go:84] configureAuth start
	I0703 23:52:19.800111  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:19.817462  231294 provision.go:143] copyHostCerts
	I0703 23:52:19.817515  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:19.817522  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:19.817586  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:19.817664  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:19.817672  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:19.817698  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:19.817749  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:19.817756  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:19.817775  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:19.817826  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:20.038356  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:20.038426  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:20.038459  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:20.057162  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:52:21.225139  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59312->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:21.225168  231294 retry.go:31] will retry after 202.1552ms: ssh: handshake failed: read tcp 127.0.0.1:59312->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:21.225224  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59328->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:24.297081  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59342->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:24.297484  231294 provision.go:87] duration metric: took 4.49741501s to configureAuth
	W0703 23:52:24.297520  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59342->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:24.297539  231294 retry.go:31] will retry after 1.984701781s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59342->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:24.297496  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59354->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:26.282381  231294 provision.go:84] configureAuth start
	I0703 23:52:26.282499  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:26.299679  231294 provision.go:143] copyHostCerts
	I0703 23:52:26.299742  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:26.299754  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:26.299818  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:26.299901  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:26.299909  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:26.299931  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:26.299981  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:26.299988  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:26.300008  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:26.300056  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:26.493659  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:26.493727  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:26.493775  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:26.510762  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:52:27.369146  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59366->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:27.369171  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59380->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:27.369178  231294 retry.go:31] will retry after 221.687435ms: ssh: handshake failed: read tcp 127.0.0.1:59366->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:30.665161  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59396->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:30.665247  231294 provision.go:87] duration metric: took 4.382838396s to configureAuth
	W0703 23:52:30.665259  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59396->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:30.665169  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45354->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:30.665272  231294 retry.go:31] will retry after 2.469917789s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59396->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:33.136552  231294 provision.go:84] configureAuth start
	I0703 23:52:33.136644  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:33.154018  231294 provision.go:143] copyHostCerts
	I0703 23:52:33.154083  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:33.154096  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:33.154171  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:33.154266  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:33.154278  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:33.154315  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:33.154386  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:33.154396  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:33.154437  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:33.154502  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:33.289693  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:33.289773  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:33.289812  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:33.307134  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:52:33.737134  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45358->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:33.737151  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45370->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:33.737161  231294 retry.go:31] will retry after 169.762375ms: ssh: handshake failed: read tcp 127.0.0.1:45358->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:36.973083  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:45378->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:36.973093  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45390->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:36.973152  231294 provision.go:87] duration metric: took 3.836568425s to configureAuth
	W0703 23:52:36.973163  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45378->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:36.973173  231294 retry.go:31] will retry after 3.079000076s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:45378->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:40.041196  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54194->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:40.052391  231294 provision.go:84] configureAuth start
	I0703 23:52:40.052530  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:40.069108  231294 provision.go:143] copyHostCerts
	I0703 23:52:40.069159  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:40.069167  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:40.069228  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:40.069304  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:40.069325  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:40.069351  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:40.069403  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:40.069411  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:40.069430  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:40.069479  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:40.171103  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:40.171160  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:40.171195  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:40.188392  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	I0703 23:52:43.241091  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54212->127.0.0.1:33033: read: connection reset by peer
	W0703 23:52:43.241142  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54208->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:43.241193  231294 provision.go:87] duration metric: took 3.18878246s to configureAuth
	W0703 23:52:43.241202  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54208->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:43.241211  231294 retry.go:31] will retry after 5.575273666s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54208->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:46.317127  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54224->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:48.816830  231294 provision.go:84] configureAuth start
	I0703 23:52:48.816948  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:52:48.834252  231294 provision.go:143] copyHostCerts
	I0703 23:52:48.834320  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:52:48.834331  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:52:48.834393  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:52:48.834478  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:52:48.834488  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:52:48.834511  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:52:48.834580  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:52:48.834588  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:52:48.834608  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:52:48.834674  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:52:48.931804  231294 provision.go:177] copyRemoteCerts
	I0703 23:52:48.931861  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:52:48.931895  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:52:48.949180  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:52:52.009158  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:54230->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:52.009170  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54240->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:52.009234  231294 provision.go:87] duration metric: took 3.192376688s to configureAuth
	W0703 23:52:52.009246  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54230->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:52.009265  231294 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:54230->127.0.0.1:33033: read: connection reset by peer
	I0703 23:52:52.009276  231294 machine.go:97] duration metric: took 17m55.240131053s to provisionDockerMachine
	I0703 23:52:52.009296  231294 client.go:171] duration metric: took 18m3.905049735s to LocalClient.Create
	I0703 23:52:55.081088  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58902->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:01.161127  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58910->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:04.233134  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44972->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:10.313124  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44980->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:13.385093  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39064->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:19.465143  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39066->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:22.541080  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38922->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:28.621102  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38936->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:31.689104  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50372->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:37.773116  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50380->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:40.841177  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54606->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:46.921155  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54614->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:49.993250  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38920->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:56.073161  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38926->127.0.0.1:33033: read: connection reset by peer
	I0703 23:53:59.145168  231294 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38936->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:02.145474  231294 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 23:54:02.145509  231294 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18859-12140/.minikube CaCertPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18859-12140/.minikube}
	I0703 23:54:02.145530  231294 ubuntu.go:177] setting up certificates
	I0703 23:54:02.145542  231294 provision.go:84] configureAuth start
	I0703 23:54:02.145613  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:02.162900  231294 provision.go:143] copyHostCerts
	I0703 23:54:02.162952  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:02.162959  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:02.163022  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:02.163105  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:02.163113  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:02.163135  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:02.163190  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:02.163198  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:02.163217  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:02.163266  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:02.561396  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:02.561461  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:02.561506  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:02.579282  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:05.641174  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44456->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:05.641247  231294 provision.go:87] duration metric: took 3.495700902s to configureAuth
	W0703 23:54:05.641256  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44456->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:05.641271  231294 retry.go:31] will retry after 145.707µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44456->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:05.642370  231294 provision.go:84] configureAuth start
	I0703 23:54:05.642442  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:05.659750  231294 provision.go:143] copyHostCerts
	I0703 23:54:05.659820  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:05.659831  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:05.659886  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:05.659983  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:05.659992  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:05.660011  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:05.660076  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:05.660083  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:05.660102  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:05.660153  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:05.751434  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:05.751495  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:05.751534  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:05.769181  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:08.713202  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44470->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:08.713284  231294 provision.go:87] duration metric: took 3.070899774s to configureAuth
	W0703 23:54:08.713292  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44470->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:08.713306  231294 retry.go:31] will retry after 219.007µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44470->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:08.714404  231294 provision.go:84] configureAuth start
	I0703 23:54:08.714459  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:08.732220  231294 provision.go:143] copyHostCerts
	I0703 23:54:08.732277  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:08.732286  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:08.732350  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:08.732456  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:08.732466  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:08.732488  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:08.732557  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:08.732565  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:08.732588  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:08.732664  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:08.856285  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:08.856343  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:08.856385  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:08.873569  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:11.785189  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:44480->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:11.785268  231294 provision.go:87] duration metric: took 3.070854213s to configureAuth
	W0703 23:54:11.785277  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44480->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:11.785286  231294 retry.go:31] will retry after 235.198µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:44480->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:11.786389  231294 provision.go:84] configureAuth start
	I0703 23:54:11.786455  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:11.803171  231294 provision.go:143] copyHostCerts
	I0703 23:54:11.803237  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:11.803250  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:11.803307  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:11.803449  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:11.803463  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:11.803495  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:11.803589  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:11.803599  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:11.803633  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:11.803703  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:11.960653  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:11.960706  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:11.960740  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:11.977934  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:14.857219  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:48218->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:14.857311  231294 provision.go:87] duration metric: took 3.070912148s to configureAuth
	W0703 23:54:14.857322  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48218->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:14.857332  231294 retry.go:31] will retry after 210.109µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48218->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:14.858443  231294 provision.go:84] configureAuth start
	I0703 23:54:14.858538  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:14.876361  231294 provision.go:143] copyHostCerts
	I0703 23:54:14.876424  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:14.876433  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:14.876481  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:14.876572  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:14.876580  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:14.876598  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:14.876650  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:14.876656  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:14.876674  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:14.876720  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:15.105446  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:15.105500  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:15.105533  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:15.122648  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:17.929245  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:48220->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:17.929344  231294 provision.go:87] duration metric: took 3.070878956s to configureAuth
	W0703 23:54:17.929356  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48220->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:17.929368  231294 retry.go:31] will retry after 725.4µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48220->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:17.930488  231294 provision.go:84] configureAuth start
	I0703 23:54:17.930589  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:17.947782  231294 provision.go:143] copyHostCerts
	I0703 23:54:17.947843  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:17.947856  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:17.947926  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:17.948023  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:17.948033  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:17.948071  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:17.948143  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:17.948152  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:17.948183  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:17.948249  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:18.130217  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:18.130294  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:18.130345  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:18.147622  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:21.001271  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:48234->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:21.001364  231294 provision.go:87] duration metric: took 3.070856426s to configureAuth
	W0703 23:54:21.001376  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48234->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:21.001387  231294 retry.go:31] will retry after 534.083µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:48234->127.0.0.1:33033: read: connection reset by peer
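
Every cycle resolves the container's published SSH port with the docker container inspect template shown in the cli_runner.go lines. The snippet below runs that same lookup from Go via os/exec (without cli_runner's extra quoting); the plain error handling and the hard-coded container name are only for the example, not how minikube's cli_runner wraps the command.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as in the logged command: pick the host port that
	// Docker published for the container's 22/tcp endpoint.
	const format = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	container := "missing-upgrade-167387" // container name taken from the log

	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("published SSH port:", strings.TrimSpace(string(out))) // e.g. 33033 in this run
}
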
	I0703 23:54:21.002493  231294 provision.go:84] configureAuth start
	I0703 23:54:21.002579  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:21.020098  231294 provision.go:143] copyHostCerts
	I0703 23:54:21.020179  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:21.020193  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:21.020337  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:21.020460  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:21.020472  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:21.020507  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:21.020608  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:21.020619  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:21.020651  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:21.020729  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:21.309476  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:21.309535  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:21.309568  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:21.326617  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:24.073210  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:47596->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:24.073296  231294 provision.go:87] duration metric: took 3.070790205s to configureAuth
	W0703 23:54:24.073304  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47596->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:24.073318  231294 retry.go:31] will retry after 812.118µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47596->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:24.074421  231294 provision.go:84] configureAuth start
	I0703 23:54:24.074514  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:24.091604  231294 provision.go:143] copyHostCerts
	I0703 23:54:24.091667  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:24.091679  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:24.091746  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:24.091839  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:24.091851  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:24.091885  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:24.091952  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:24.091963  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:24.091992  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:24.092055  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:24.321172  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:24.321228  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:24.321263  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:24.338542  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:27.145214  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:47606->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:27.145297  231294 provision.go:87] duration metric: took 3.070854615s to configureAuth
	W0703 23:54:27.145304  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47606->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:27.145321  231294 retry.go:31] will retry after 952.597µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47606->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:27.146422  231294 provision.go:84] configureAuth start
	I0703 23:54:27.146482  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:27.163825  231294 provision.go:143] copyHostCerts
	I0703 23:54:27.163885  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:27.163896  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:27.163952  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:27.164039  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:27.164048  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:27.164069  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:27.164138  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:27.164146  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:27.164166  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:27.164220  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:27.518464  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:27.518528  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:27.518562  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:27.535751  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:30.217232  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:47616->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:30.217311  231294 provision.go:87] duration metric: took 3.07087906s to configureAuth
	W0703 23:54:30.217320  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47616->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:30.217331  231294 retry.go:31] will retry after 2.791894ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47616->127.0.0.1:33033: read: connection reset by peer
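
The sshutil.go lines show the client side of each attempt: dial 127.0.0.1:33033 as user docker with the machine's id_rsa key, and in this run the TCP connection is reset before the handshake finishes. A bare-bones version of that client setup with golang.org/x/crypto/ssh is sketched below; the key path and the InsecureIgnoreHostKey callback are simplifications for the sketch, not what minikube's sshutil does.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key and endpoint as reported by sshutil.go in the log (path simplified).
	keyPath := os.ExpandEnv("$HOME/.minikube/machines/missing-upgrade-167387/id_rsa")
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		fmt.Println("read key:", err)
		return
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		fmt.Println("parse key:", err)
		return
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // simplification for the sketch only
	}
	// The failure in the log happens here: the TCP connection is reset
	// before the SSH handshake completes.
	client, err := ssh.Dial("tcp", "127.0.0.1:33033", cfg)
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}
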
	I0703 23:54:30.220500  231294 provision.go:84] configureAuth start
	I0703 23:54:30.220564  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:30.238327  231294 provision.go:143] copyHostCerts
	I0703 23:54:30.238384  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:30.238391  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:30.238449  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:30.238527  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:30.238535  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:30.238557  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:30.238620  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:30.238627  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:30.238648  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:30.238695  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:30.452110  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:30.452171  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:30.452205  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:30.469953  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:33.289228  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39660->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:33.289320  231294 provision.go:87] duration metric: took 3.068807949s to configureAuth
	W0703 23:54:33.289332  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39660->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:33.289348  231294 retry.go:31] will retry after 5.205975ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39660->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:33.295520  231294 provision.go:84] configureAuth start
	I0703 23:54:33.295615  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:33.313715  231294 provision.go:143] copyHostCerts
	I0703 23:54:33.313774  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:33.313786  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:33.313853  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:33.313934  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:33.313942  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:33.313965  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:33.314019  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:33.314026  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:33.314046  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:33.314096  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:33.544966  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:33.545023  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:33.545059  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:33.562087  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:36.361180  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39676->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:36.361268  231294 provision.go:87] duration metric: took 3.065726866s to configureAuth
	W0703 23:54:36.361277  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39676->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:36.361292  231294 retry.go:31] will retry after 4.889975ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39676->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:36.366510  231294 provision.go:84] configureAuth start
	I0703 23:54:36.366582  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:36.385042  231294 provision.go:143] copyHostCerts
	I0703 23:54:36.385101  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:36.385108  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:36.385158  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:36.385234  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:36.385242  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:36.385260  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:36.385314  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:36.385321  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:36.385337  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:36.385391  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:36.586238  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:36.586298  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:36.586332  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:36.604010  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:39.433198  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39680->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:39.433275  231294 provision.go:87] duration metric: took 3.066744639s to configureAuth
	W0703 23:54:39.433283  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39680->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:39.433292  231294 retry.go:31] will retry after 4.698761ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39680->127.0.0.1:33033: read: connection reset by peer
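
Each provision.go:117 line regenerates a server certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 172.17.0.2, localhost, minikube, missing-upgrade-167387). The sketch below shows the general shape of issuing such a certificate with crypto/x509, using a throwaway in-memory CA so it runs standalone and omitting error handling for brevity; it is a simplified stand-in for, not a copy of, minikube's provisioner.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA generated in memory so the sketch is self-contained;
	// minikube instead loads ca.pem / ca-key.pem from .minikube/certs.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs reported by provision.go:117.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.missing-upgrade-167387"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "missing-upgrade-167387"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.0.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	fmt.Printf("server.pem (%d bytes):\n%s", len(pemBytes), pemBytes)
}
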
	I0703 23:54:39.438465  231294 provision.go:84] configureAuth start
	I0703 23:54:39.438544  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:39.456235  231294 provision.go:143] copyHostCerts
	I0703 23:54:39.456293  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:39.456301  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:39.456365  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:39.456512  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:39.456521  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:39.456544  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:39.456600  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:39.456608  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:39.456627  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:39.456673  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:39.643866  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:39.643919  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:39.643955  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:39.661498  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:42.505242  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:39682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:42.505331  231294 provision.go:87] duration metric: took 3.066850514s to configureAuth
	W0703 23:54:42.505344  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:42.505362  231294 retry.go:31] will retry after 8.088632ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:39682->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:42.514547  231294 provision.go:84] configureAuth start
	I0703 23:54:42.514639  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:42.531837  231294 provision.go:143] copyHostCerts
	I0703 23:54:42.531900  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:42.531914  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:42.531972  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:42.532074  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:42.532084  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:42.532117  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:42.532189  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:42.532198  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:42.532223  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:42.532290  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:42.610477  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:42.610531  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:42.610562  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:42.628040  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:45.577185  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:42074->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:45.577271  231294 provision.go:87] duration metric: took 3.062686182s to configureAuth
	W0703 23:54:45.577283  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42074->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:45.577303  231294 retry.go:31] will retry after 10.934798ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42074->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:45.588491  231294 provision.go:84] configureAuth start
	I0703 23:54:45.588570  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:45.606139  231294 provision.go:143] copyHostCerts
	I0703 23:54:45.606210  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:45.606222  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:45.606298  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:45.606402  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:45.606411  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:45.606445  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:45.606512  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:45.606521  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:45.606552  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:45.606617  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:45.889818  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:45.889879  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:45.889914  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:45.907272  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:48.649238  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:42078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:48.649325  231294 provision.go:87] duration metric: took 3.060811301s to configureAuth
	W0703 23:54:48.649338  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:48.649359  231294 retry.go:31] will retry after 34.913919ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:48.684560  231294 provision.go:84] configureAuth start
	I0703 23:54:48.684665  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:48.702913  231294 provision.go:143] copyHostCerts
	I0703 23:54:48.702978  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:48.702989  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:48.703063  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:48.703191  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:48.703200  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:48.703235  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:48.703304  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:48.703315  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:48.703345  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:48.703414  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:48.921485  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:48.921540  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:48.921572  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:48.939143  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:51.721162  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:42088->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:51.721260  231294 provision.go:87] duration metric: took 3.036665474s to configureAuth
	W0703 23:54:51.721271  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42088->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:51.721282  231294 retry.go:31] will retry after 61.045143ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:42088->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:51.782444  231294 provision.go:84] configureAuth start
	I0703 23:54:51.782554  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:51.800375  231294 provision.go:143] copyHostCerts
	I0703 23:54:51.800433  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:51.800440  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:51.800488  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:51.800569  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:51.800577  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:51.800594  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:51.800646  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:51.800653  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:51.800669  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:51.800716  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:52.202785  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:52.202897  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:52.202967  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:52.220173  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:54.793232  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35128->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:54.793351  231294 provision.go:87] duration metric: took 3.010870058s to configureAuth
	W0703 23:54:54.793364  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35128->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:54.793404  231294 retry.go:31] will retry after 56.33637ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35128->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:54.850676  231294 provision.go:84] configureAuth start
	I0703 23:54:54.850816  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:54.867811  231294 provision.go:143] copyHostCerts
	I0703 23:54:54.867892  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:54.867905  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:54.867976  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:54.868083  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:54.868094  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:54.868128  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:54.868198  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:54.868209  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:54.868241  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:54.868312  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:55.073444  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:55.073551  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:55.073597  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:55.091572  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:54:57.865211  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35134->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:57.865287  231294 provision.go:87] duration metric: took 3.014582202s to configureAuth
	W0703 23:54:57.865296  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35134->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:57.865308  231294 retry.go:31] will retry after 97.761144ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35134->127.0.0.1:33033: read: connection reset by peer
	I0703 23:54:57.963579  231294 provision.go:84] configureAuth start
	I0703 23:54:57.963686  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:54:57.980169  231294 provision.go:143] copyHostCerts
	I0703 23:54:57.980227  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:54:57.980238  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:54:57.980296  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:54:57.980375  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:54:57.980391  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:54:57.980414  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:54:57.980464  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:54:57.980471  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:54:57.980490  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:54:57.980564  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:54:58.313165  231294 provision.go:177] copyRemoteCerts
	I0703 23:54:58.313224  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:54:58.313258  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:54:58.330848  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:00.937180  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:35146->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:00.937269  231294 provision.go:87] duration metric: took 2.973662299s to configureAuth
	W0703 23:55:00.937277  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35146->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:00.937292  231294 retry.go:31] will retry after 189.547566ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:35146->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:01.127720  231294 provision.go:84] configureAuth start
	I0703 23:55:01.127857  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:01.144990  231294 provision.go:143] copyHostCerts
	I0703 23:55:01.145053  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:01.145064  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:01.145110  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:01.145188  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:01.145195  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:01.145212  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:01.145260  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:01.145267  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:01.145282  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:01.145326  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:01.257411  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:01.257472  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:01.257512  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:01.274973  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:04.009102  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37248->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:04.009198  231294 provision.go:87] duration metric: took 2.881434302s to configureAuth
	W0703 23:55:04.009210  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37248->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:04.009223  231294 retry.go:31] will retry after 193.237174ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37248->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:04.202535  231294 provision.go:84] configureAuth start
	I0703 23:55:04.202646  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:04.219759  231294 provision.go:143] copyHostCerts
	I0703 23:55:04.219820  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:04.219831  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:04.219893  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:04.219971  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:04.219979  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:04.220004  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:04.220056  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:04.220062  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:04.220082  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:04.220128  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:04.419216  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:04.419269  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:04.419301  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:04.436879  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:07.081243  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37262->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:07.081329  231294 provision.go:87] duration metric: took 2.878761074s to configureAuth
	W0703 23:55:07.081338  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37262->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:07.081351  231294 retry.go:31] will retry after 188.367825ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37262->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:07.270724  231294 provision.go:84] configureAuth start
	I0703 23:55:07.270817  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:07.288106  231294 provision.go:143] copyHostCerts
	I0703 23:55:07.288162  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:07.288170  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:07.288230  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:07.288310  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:07.288318  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:07.288336  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:07.288388  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:07.288396  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:07.288412  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:07.288460  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:07.436186  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:07.436258  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:07.436289  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:07.453766  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:10.153229  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37268->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:10.153322  231294 provision.go:87] duration metric: took 2.882557299s to configureAuth
	W0703 23:55:10.153333  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37268->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:10.153344  231294 retry.go:31] will retry after 475.997513ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37268->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:10.630016  231294 provision.go:84] configureAuth start
	I0703 23:55:10.630117  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:10.647397  231294 provision.go:143] copyHostCerts
	I0703 23:55:10.647454  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:10.647460  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:10.647514  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:10.647590  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:10.647597  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:10.647619  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:10.647679  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:10.647685  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:10.647707  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:10.647754  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:10.842501  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:10.842567  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:10.842601  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:10.859654  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:13.225224  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:36280->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:13.225301  231294 provision.go:87] duration metric: took 2.595245266s to configureAuth
	W0703 23:55:13.225309  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36280->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:13.225320  231294 retry.go:31] will retry after 745.127338ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36280->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:13.971198  231294 provision.go:84] configureAuth start
	I0703 23:55:13.971301  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:13.989151  231294 provision.go:143] copyHostCerts
	I0703 23:55:13.989220  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:13.989233  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:13.989321  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:13.989419  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:13.989429  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:13.989460  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:13.989529  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:13.989538  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:13.989570  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:13.989651  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:14.073195  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:14.073254  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:14.073301  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:14.090991  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:16.297237  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:36296->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:16.297331  231294 provision.go:87] duration metric: took 2.32609713s to configureAuth
	W0703 23:55:16.297343  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36296->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:16.297358  231294 retry.go:31] will retry after 1.606493015s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:36296->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:17.904596  231294 provision.go:84] configureAuth start
	I0703 23:55:17.904727  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:17.922143  231294 provision.go:143] copyHostCerts
	I0703 23:55:17.922215  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:17.922229  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:17.922321  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:17.922431  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:17.922442  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:17.922474  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:17.922554  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:17.922563  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:17.922588  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:17.922657  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:18.182532  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:18.182601  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:18.182649  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:18.199816  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:19.369200  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:36312->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:19.369232  231294 retry.go:31] will retry after 311.14766ms: ssh: handshake failed: read tcp 127.0.0.1:36312->127.0.0.1:33033: read: connection reset by peer
	W0703 23:55:22.441288  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52966->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:22.441370  231294 provision.go:87] duration metric: took 4.536737887s to configureAuth
	W0703 23:55:22.441379  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52966->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:22.441393  231294 retry.go:31] will retry after 905.717209ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52966->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:23.347398  231294 provision.go:84] configureAuth start
	I0703 23:55:23.347494  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:23.363818  231294 provision.go:143] copyHostCerts
	I0703 23:55:23.363873  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:23.363880  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:23.363936  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:23.364013  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:23.364021  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:23.364039  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:23.364087  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:23.364094  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:23.364110  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:23.364154  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:23.481280  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:23.481338  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:23.481374  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:23.498507  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:25.513221  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52982->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:25.513321  231294 provision.go:87] duration metric: took 2.165894188s to configureAuth
	W0703 23:55:25.513332  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52982->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:25.513343  231294 retry.go:31] will retry after 2.539713448s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52982->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:28.054294  231294 provision.go:84] configureAuth start
	I0703 23:55:28.054413  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:28.072120  231294 provision.go:143] copyHostCerts
	I0703 23:55:28.072172  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:28.072180  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:28.072241  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:28.072319  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:28.072327  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:28.072352  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:28.072404  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:28.072412  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:28.072431  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:28.072477  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:28.326256  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:28.326325  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:28.326357  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:28.343602  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:28.585141  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52996->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:28.585172  231294 retry.go:31] will retry after 197.666638ms: ssh: handshake failed: read tcp 127.0.0.1:52996->127.0.0.1:33033: read: connection reset by peer
	W0703 23:55:31.849180  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:53000->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:31.849274  231294 provision.go:87] duration metric: took 3.794944338s to configureAuth
	W0703 23:55:31.849286  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53000->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:31.849300  231294 retry.go:31] will retry after 3.452792489s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:53000->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:35.304876  231294 provision.go:84] configureAuth start
	I0703 23:55:35.304967  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:35.321595  231294 provision.go:143] copyHostCerts
	I0703 23:55:35.321656  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:35.321666  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:35.321726  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:35.321800  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:35.321810  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:35.321832  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:35.321885  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:35.321893  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:35.321911  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:35.321957  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:35.479950  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:35.480009  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:35.480040  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:35.498161  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:38.569162  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37644->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:38.569261  231294 provision.go:87] duration metric: took 3.264352103s to configureAuth
	W0703 23:55:38.569273  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37644->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:38.569291  231294 retry.go:31] will retry after 7.110714686s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:37644->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:45.683644  231294 provision.go:84] configureAuth start
	I0703 23:55:45.683760  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:45.701434  231294 provision.go:143] copyHostCerts
	I0703 23:55:45.701492  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:45.701501  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:45.701559  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:45.701639  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:45.701647  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:45.701669  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:45.701719  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:45.701726  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:45.701745  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:45.701792  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:45.877463  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:45.877517  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:45.877578  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:45.894701  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:48.969150  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:48.969233  231294 provision.go:87] duration metric: took 3.285551699s to configureAuth
	W0703 23:55:48.969242  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:48.969252  231294 retry.go:31] will retry after 5.568030227s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:59078->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:54.540037  231294 provision.go:84] configureAuth start
	I0703 23:55:54.540177  231294 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-167387
	I0703 23:55:54.558209  231294 provision.go:143] copyHostCerts
	I0703 23:55:54.558264  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem, removing ...
	I0703 23:55:54.558271  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem
	I0703 23:55:54.558327  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/ca.pem (1082 bytes)
	I0703 23:55:54.558412  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem, removing ...
	I0703 23:55:54.558419  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem
	I0703 23:55:54.558442  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/cert.pem (1123 bytes)
	I0703 23:55:54.558519  231294 exec_runner.go:144] found /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem, removing ...
	I0703 23:55:54.558526  231294 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem
	I0703 23:55:54.558545  231294 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18859-12140/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18859-12140/.minikube/key.pem (1679 bytes)
	I0703 23:55:54.558591  231294 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18859-12140/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18859-12140/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-167387 san=[127.0.0.1 172.17.0.2 localhost minikube missing-upgrade-167387]
	I0703 23:55:54.628215  231294 provision.go:177] copyRemoteCerts
	I0703 23:55:54.628270  231294 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 23:55:54.628304  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:54.645421  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:55:57.705241  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:57.705318  231294 provision.go:87] duration metric: took 3.165234466s to configureAuth
	W0703 23:55:57.705327  231294 ubuntu.go:180] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:57.705343  231294 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	I0703 23:55:57.705361  231294 machine.go:97] duration metric: took 14m51.235417716s to provisionDockerMachine
	I0703 23:55:57.705452  231294 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:55:57.705488  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:55:57.723266  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:56:00.777309  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:52936->127.0.0.1:33033: read: connection reset by peer
	W0703 23:56:00.777395  231294 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52936->127.0.0.1:33033: read: connection reset by peer
	W0703 23:56:00.777409  231294 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52936->127.0.0.1:33033: read: connection reset by peer
	I0703 23:56:00.777463  231294 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0703 23:56:00.777517  231294 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-167387
	I0703 23:56:00.795048  231294 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33033 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa Username:docker}
	W0703 23:56:03.849193  231294 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:47388->127.0.0.1:33033: read: connection reset by peer
	W0703 23:56:03.849277  231294 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47388->127.0.0.1:33033: read: connection reset by peer
	W0703 23:56:03.849294  231294 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47388->127.0.0.1:33033: read: connection reset by peer
	I0703 23:56:03.849301  231294 fix.go:56] duration metric: took 14m57.399166083s for fixHost
	I0703 23:56:03.849320  231294 start.go:83] releasing machines lock for "missing-upgrade-167387", held for 14m57.399207121s
	W0703 23:56:03.849473  231294 out.go:239] * Failed to start docker container. Running "minikube delete -p missing-upgrade-167387" may fix it: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	* Failed to start docker container. Running "minikube delete -p missing-upgrade-167387" may fix it: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	I0703 23:56:03.851378  231294 out.go:177] 
	W0703 23:56:03.852502  231294 out.go:239] X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:52930->127.0.0.1:33033: read: connection reset by peer
	W0703 23:56:03.852518  231294 out.go:239] * 
	* 
	W0703 23:56:03.853421  231294 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0703 23:56:03.854628  231294 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:331: failed missing container upgrade from v1.26.0. args: out/minikube-linux-amd64 start -p missing-upgrade-167387 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd : exit status 80
version_upgrade_test.go:333: *** TestMissingContainerUpgrade FAILED at 2024-07-03 23:56:03.972410559 +0000 UTC m=+3273.889794555
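Every configureAuth attempt in the log above failed the SSH handshake against 127.0.0.1:33033, the host port published for the container's 22/tcp. A minimal manual check of that endpoint, assuming the same port mapping, key path and "docker" user shown in the log (these values are specific to this run), would be:

	# Confirm which host port maps to the container's SSH port.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' missing-upgrade-167387
	# Attempt a handshake with the machine key used by the provisioner.
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -i /home/jenkins/minikube-integration/18859-12140/.minikube/machines/missing-upgrade-167387/id_rsa \
	  -p 33033 docker@127.0.0.1 true

A "connection reset by peer" at this point, as in the log, suggests the SSH service behind the forwarded port never completed the handshake.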
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-167387
helpers_test.go:235: (dbg) docker inspect missing-upgrade-167387:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "355630180ed7baec32dcb29e323591f0ffcf1c3e285ef3f8e9e381eade21c575",
	        "Created": "2024-07-03T23:34:55.540037127Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 238296,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-03T23:34:55.682844037Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:ff7b11088f07b5cc3be2087261aee1627a466228279acbafdb95902df26942d2",
	        "ResolvConfPath": "/var/lib/docker/containers/355630180ed7baec32dcb29e323591f0ffcf1c3e285ef3f8e9e381eade21c575/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/355630180ed7baec32dcb29e323591f0ffcf1c3e285ef3f8e9e381eade21c575/hostname",
	        "HostsPath": "/var/lib/docker/containers/355630180ed7baec32dcb29e323591f0ffcf1c3e285ef3f8e9e381eade21c575/hosts",
	        "LogPath": "/var/lib/docker/containers/355630180ed7baec32dcb29e323591f0ffcf1c3e285ef3f8e9e381eade21c575/355630180ed7baec32dcb29e323591f0ffcf1c3e285ef3f8e9e381eade21c575-json.log",
	        "Name": "/missing-upgrade-167387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-167387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "bridge",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4b37ccc806fe956d8b3016b45870e9a3cc812bebf04291e5912d6c8f2381e093-init/diff:/var/lib/docker/overlay2/b9962fd18a0f90a25db61dfdeeff98588689598f33878cf02afb69e8733f581f/diff:/var/lib/docker/overlay2/c97f8833c0684d823fd8ca96a4a0e08bbc44dc87af485ade5505e3fbdb93f987/diff:/var/lib/docker/overlay2/917dac2ebf0608f426f8c5abb6e5e5e253cede19f374e69658d57679027d0b64/diff:/var/lib/docker/overlay2/1581ce24c86a33dba358b1545c9e3ef9164e703e887de36e6ab3d68ae30fa677/diff:/var/lib/docker/overlay2/432878fc80366318c106f5fb9e5bb66c2b2d65351a8ebfd3fa9700cbac6321fd/diff:/var/lib/docker/overlay2/91de4accb83758b16d5a548c651c9e762da65deaed57005d2da1d6b1d104238e/diff:/var/lib/docker/overlay2/0f62a9893e67a13be5e58fd36d171797cf139a46b38774bdf7b6419fdad83286/diff:/var/lib/docker/overlay2/bd9d3f570579382155e6d4c9e7ba86211668253777762aa1900a88282e393443/diff:/var/lib/docker/overlay2/5e45fa745b908848f3ef2981ea61465d0d4218ffcf8ce45119486657f92fefc2/diff:/var/lib/docker/overlay2/36f543
09abdadba8bad16b6fcf24487346178600d17b5c5df6de7c5b7300f8c7/diff:/var/lib/docker/overlay2/537474c46d1095a5e9dab714a33cc42eb3d7d35f53ba13013b2375bfa5624ff0/diff:/var/lib/docker/overlay2/8e47004e7db2700c78597ef39cc741c053fb103c71f58842cd10781ce6573bac/diff:/var/lib/docker/overlay2/725abf740915c09d51ee23e311787bf39d2e74cdf1207f6100fc2cc2e7e4a01e/diff:/var/lib/docker/overlay2/2395fa363f014059094dea7cc1cc491a000d1004e1805702347a56f06df2a4e1/diff:/var/lib/docker/overlay2/348d4088b761ffbd120081cb9ebe8d6406318af2635cba310e733bc750c5054e/diff:/var/lib/docker/overlay2/4ced177e3736dcc9f99c06c0bcd396c0779a8965ae2ff0786cabb61cdbab4010/diff:/var/lib/docker/overlay2/f64c2e4101f027a8f10cfbbc93350c425b04c1033ed5ef2f225073944ae847e6/diff:/var/lib/docker/overlay2/4669d0e1325d40e4edc9286c59973be844da35719405366b20bc4841a7f7d52f/diff:/var/lib/docker/overlay2/4bbcdae6f1824d16478402e4e94ec0689522d187340c1980adb4e4e2a64c9dbe/diff:/var/lib/docker/overlay2/d22c54d12c28c17aa1117b47898e4474c73e9c67720a90bf912036a71e08b139/diff:/var/lib/d
ocker/overlay2/3f02e3ce2cd7c81e8f6f16544ed1a252629255aab8c6355f0aec01d7e8040109/diff:/var/lib/docker/overlay2/90a02dbed8abc4083771aa00d1320966c605c910654ed53ca3a6b3b8ba0b61fe/diff:/var/lib/docker/overlay2/4abdeb988d7e98eaedbbdcdf653d6e31bb63be20a4de6d841ee9646d967f2b47/diff:/var/lib/docker/overlay2/585c51e8d1507ca643763c108b55625db26f6e891649047ef0b3407bd81fe3ac/diff:/var/lib/docker/overlay2/77d39a4b10288df9de38622e3bc69f9abb338f34add6d70784b988bda32f7cd0/diff:/var/lib/docker/overlay2/b47749ff333d378e8ef0c6ebb6889f28f0ea06a96c170c5c00bd22c51b6e89f4/diff:/var/lib/docker/overlay2/dbb53883e3dd6c897ca8528b5702bc01b518e8b862c3c79491f5be7aedec5b5b/diff:/var/lib/docker/overlay2/d654ddd0c17ef7abdbefdc6232ca6118e0dbad7f5440a2e9d1296c4df41cd4ee/diff:/var/lib/docker/overlay2/dff1f9788904940c4e45d52ac5c1bd8be93a0289cd369aaea49b2eb55c09ec30/diff:/var/lib/docker/overlay2/219da51f7cc6b5b374ce1b576972e2a60a171c15ac4d3f901166bd2714bcecad/diff:/var/lib/docker/overlay2/051537ea6e9bc1f9df7573c2d74fc816890f2fec34f0744cfa19cd97a55
96da9/diff:/var/lib/docker/overlay2/50d94c8d87b930af001c11f41e61cfcd99626dc5ecf3e30db12b2a1a51e6236f/diff:/var/lib/docker/overlay2/bfd2442c7d6f11fea7283bab4bc0e4ca6515fb96f7db847e2308557aef5373a8/diff:/var/lib/docker/overlay2/cfe894e25e2900f2ddeaa1ddbfa3f7f4853e57c1f9a1a258b94a2d9dd91b3eff/diff:/var/lib/docker/overlay2/f84c75e2fe8974e3029d8fcd16bd95bcea8b60ff51501483f39dbbd5dd3208d0/diff:/var/lib/docker/overlay2/55864eba553b3ed567d4a45bddbed49017fb7f69f749b6a1632dd57888163ced/diff:/var/lib/docker/overlay2/cd641c1bf661a845c39e673cb5eece897bb4c860aa93250e5afbf110579ced66/diff:/var/lib/docker/overlay2/8c755e3dc0c4d18c5a892ddc1555972b28591ccf21acd9704b5adb2c1b1197fa/diff:/var/lib/docker/overlay2/162a48af12b555aeb4101c8f6848a3a90deaa7afa4f69ed2f47f918d8be71197/diff:/var/lib/docker/overlay2/6f976680ccb19921387b884f67a5e3f64ba9d4ee3a75b2e9e42c70ee09be71b9/diff:/var/lib/docker/overlay2/78a3700ee0b563953d55f4930a440d89274e49ba7c6ddd647f392d6b727a6ea9/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b37ccc806fe956d8b3016b45870e9a3cc812bebf04291e5912d6c8f2381e093/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b37ccc806fe956d8b3016b45870e9a3cc812bebf04291e5912d6c8f2381e093/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b37ccc806fe956d8b3016b45870e9a3cc812bebf04291e5912d6c8f2381e093/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-167387",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-167387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-167387",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-167387",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-167387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "aa281f89467776142b8d6e0afec0c9fadea59098d26f3c98d49f50d366d215bc",
	            "SandboxKey": "/var/run/docker/netns/aa281f894677",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33033"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33034"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33037"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33035"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33036"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "6c6237bd986506fd5d171f6c183233cebd080239263db137c43eac93a4e7f047",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null,
	                    "NetworkID": "b033bba3b809c72b8e6e95fb316831f5278cba3454561259859f598d35ab5426",
	                    "EndpointID": "6c6237bd986506fd5d171f6c183233cebd080239263db137c43eac93a4e7f047",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
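The inspect output above shows the recreated container still in state "running", with 22/tcp published on 127.0.0.1:33033, the same endpoint every provisioning handshake was reset on. That suggests the problem sits with the SSH daemon inside the kicbase container rather than with the port mapping itself. A rough follow-up check, assuming docker CLI access to the same host (the unit name and tooling inside the image are assumptions, not confirmed by this report), would be:

	# Console output from the container's /sbin/init; may show sshd failing to start.
	docker logs --tail 50 missing-upgrade-167387
	# Query systemd inside the container; the SSH unit may be named "ssh" or "sshd"
	# depending on the base image, so ask for both.
	docker exec missing-upgrade-167387 systemctl --no-pager status ssh sshd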
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p missing-upgrade-167387 -n missing-upgrade-167387
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p missing-upgrade-167387 -n missing-upgrade-167387: exit status 3 (2.931867657s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:56:06.921304  386089 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47394->127.0.0.1:33033: read: connection reset by peer
	E0703 23:56:06.921322  386089 status.go:249] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:47394->127.0.0.1:33033: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "missing-upgrade-167387" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "missing-upgrade-167387" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-167387
E0703 23:56:14.284909   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-167387: (7.636780153s)
--- FAIL: TestMissingContainerUpgrade (1413.02s)
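The provisioning error recorded above recommends deleting the profile before retrying, and the cleanup step does exactly that. For a manual retry outside the test harness, a sketch along the same lines (binary path and flags copied from the failing invocation logged above) would be:

	# Remove the stale profile, then repeat the start that exited with status 80.
	out/minikube-linux-amd64 delete -p missing-upgrade-167387
	out/minikube-linux-amd64 start -p missing-upgrade-167387 --memory=2200 --alsologtostderr -v=1 \
	  --driver=docker --container-runtime=containerd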

                                                
                                    

Test pass (304/328)

Order passed test Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 13.7
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 11.32
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.05
18 TestDownloadOnly/v1.30.2/DeleteAll 0.19
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.11
20 TestDownloadOnlyKic 1.04
21 TestBinaryMirror 0.68
22 TestOffline 56.26
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 211.8
29 TestAddons/parallel/Registry 14.76
30 TestAddons/parallel/Ingress 18.88
31 TestAddons/parallel/InspektorGadget 10.72
32 TestAddons/parallel/MetricsServer 5.62
33 TestAddons/parallel/HelmTiller 13.94
35 TestAddons/parallel/CSI 50.7
36 TestAddons/parallel/Headlamp 12.73
37 TestAddons/parallel/CloudSpanner 6.47
38 TestAddons/parallel/LocalPath 58.01
39 TestAddons/parallel/NvidiaDevicePlugin 6.45
40 TestAddons/parallel/Yakd 6
41 TestAddons/parallel/Volcano 39.01
44 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestAddons/StoppedEnableDisable 12.04
46 TestCertOptions 22.66
47 TestCertExpiration 213.28
49 TestForceSystemdFlag 23.5
50 TestForceSystemdEnv 27.39
51 TestDockerEnvContainerd 35.74
52 TestKVMDriverInstallOrUpdate 3.15
56 TestErrorSpam/setup 22.75
57 TestErrorSpam/start 0.53
58 TestErrorSpam/status 0.8
59 TestErrorSpam/pause 1.4
60 TestErrorSpam/unpause 1.37
61 TestErrorSpam/stop 1.32
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 52.25
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.07
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
73 TestFunctional/serial/CacheCmd/cache/add_local 1.96
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
81 TestFunctional/serial/ExtraConfig 44.42
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.19
84 TestFunctional/serial/LogsFileCmd 1.21
85 TestFunctional/serial/InvalidService 3.53
87 TestFunctional/parallel/ConfigCmd 0.31
88 TestFunctional/parallel/DashboardCmd 21.08
89 TestFunctional/parallel/DryRun 0.36
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 0.89
95 TestFunctional/parallel/ServiceCmdConnect 60.47
96 TestFunctional/parallel/AddonsCmd 0.12
97 TestFunctional/parallel/PersistentVolumeClaim 88.2
99 TestFunctional/parallel/SSHCmd 0.6
100 TestFunctional/parallel/CpCmd 1.68
101 TestFunctional/parallel/MySQL 24.49
102 TestFunctional/parallel/FileSync 0.26
103 TestFunctional/parallel/CertSync 1.7
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.47
111 TestFunctional/parallel/License 0.57
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 60.23
117 TestFunctional/parallel/ServiceCmd/DeployApp 60.13
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
119 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
123 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
125 TestFunctional/parallel/ProfileCmd/profile_list 0.36
126 TestFunctional/parallel/ServiceCmd/List 0.5
127 TestFunctional/parallel/MountCmd/any-port 8.66
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
131 TestFunctional/parallel/ServiceCmd/Format 0.34
132 TestFunctional/parallel/ServiceCmd/URL 0.37
133 TestFunctional/parallel/MountCmd/specific-port 1.98
134 TestFunctional/parallel/MountCmd/VerifyCleanup 1.83
135 TestFunctional/parallel/Version/short 0.05
136 TestFunctional/parallel/Version/components 0.46
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.16
142 TestFunctional/parallel/ImageCommands/Setup 2.19
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.19
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.05
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.23
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.73
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.43
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.76
153 TestFunctional/delete_addon-resizer_images 0.07
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 95.56
160 TestMultiControlPlane/serial/DeployApp 29.87
161 TestMultiControlPlane/serial/PingHostFromPods 0.94
162 TestMultiControlPlane/serial/AddWorkerNode 17.45
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.61
165 TestMultiControlPlane/serial/CopyFile 14.98
166 TestMultiControlPlane/serial/StopSecondaryNode 12.43
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.45
168 TestMultiControlPlane/serial/RestartSecondaryNode 15.2
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.62
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 117.71
171 TestMultiControlPlane/serial/DeleteSecondaryNode 9.79
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.43
173 TestMultiControlPlane/serial/StopCluster 35.52
174 TestMultiControlPlane/serial/RestartCluster 44.92
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.44
176 TestMultiControlPlane/serial/AddSecondaryNode 36.2
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.61
181 TestJSONOutput/start/Command 50.07
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.63
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.56
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.64
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.19
206 TestKicCustomNetwork/create_custom_network 37.01
207 TestKicCustomNetwork/use_default_bridge_network 23.04
208 TestKicExistingNetwork 24.91
209 TestKicCustomSubnet 26.46
210 TestKicStaticIP 25.56
211 TestMainNoArgs 0.04
212 TestMinikubeProfile 44.31
215 TestMountStart/serial/StartWithMountFirst 5.21
216 TestMountStart/serial/VerifyMountFirst 0.23
217 TestMountStart/serial/StartWithMountSecond 7.92
218 TestMountStart/serial/VerifyMountSecond 0.23
219 TestMountStart/serial/DeleteFirst 1.55
220 TestMountStart/serial/VerifyMountPostDelete 0.23
221 TestMountStart/serial/Stop 1.16
222 TestMountStart/serial/RestartStopped 7.13
223 TestMountStart/serial/VerifyMountPostStop 0.24
226 TestMultiNode/serial/FreshStart2Nodes 60.48
227 TestMultiNode/serial/DeployApp2Nodes 3.91
228 TestMultiNode/serial/PingHostFrom2Pods 0.65
229 TestMultiNode/serial/AddNode 17.57
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.27
232 TestMultiNode/serial/CopyFile 8.55
233 TestMultiNode/serial/StopNode 2.04
234 TestMultiNode/serial/StartAfterStop 8.33
235 TestMultiNode/serial/RestartKeepsNodes 81.68
236 TestMultiNode/serial/DeleteNode 4.93
237 TestMultiNode/serial/StopMultiNode 23.71
238 TestMultiNode/serial/RestartMultiNode 49.31
239 TestMultiNode/serial/ValidateNameConflict 22.49
244 TestPreload 133.13
246 TestScheduledStopUnix 98.17
249 TestInsufficientStorage 9.2
250 TestRunningBinaryUpgrade 59.68
252 TestKubernetesUpgrade 318.91
255 TestStoppedBinaryUpgrade/Setup 2.36
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
257 TestNoKubernetes/serial/StartWithK8s 28.33
258 TestStoppedBinaryUpgrade/Upgrade 138.27
259 TestNoKubernetes/serial/StartWithStopK8s 22.73
260 TestNoKubernetes/serial/Start 7.53
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
262 TestNoKubernetes/serial/ProfileList 0.56
263 TestNoKubernetes/serial/Stop 3.19
264 TestNoKubernetes/serial/StartNoArgs 7.14
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
273 TestNetworkPlugins/group/false 2.88
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.75
286 TestPause/serial/Start 47.57
287 TestPause/serial/SecondStartNoReconfiguration 5.47
288 TestPause/serial/Pause 0.61
289 TestPause/serial/VerifyStatus 0.27
290 TestPause/serial/Unpause 0.55
291 TestPause/serial/PauseAgain 0.67
292 TestPause/serial/DeletePaused 2.44
293 TestPause/serial/VerifyDeletedResources 3.58
294 TestNetworkPlugins/group/auto/Start 46.71
295 TestNetworkPlugins/group/auto/KubeletFlags 0.25
296 TestNetworkPlugins/group/auto/NetCatPod 8.21
297 TestNetworkPlugins/group/auto/DNS 0.11
298 TestNetworkPlugins/group/auto/Localhost 0.09
299 TestNetworkPlugins/group/auto/HairPin 0.09
300 TestNetworkPlugins/group/kindnet/Start 51.69
301 TestNetworkPlugins/group/calico/Start 57.59
302 TestNetworkPlugins/group/custom-flannel/Start 50.29
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
305 TestNetworkPlugins/group/kindnet/NetCatPod 10.18
306 TestNetworkPlugins/group/calico/ControllerPod 6.01
307 TestNetworkPlugins/group/kindnet/DNS 0.12
308 TestNetworkPlugins/group/kindnet/Localhost 0.1
309 TestNetworkPlugins/group/kindnet/HairPin 0.1
310 TestNetworkPlugins/group/calico/KubeletFlags 0.29
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
312 TestNetworkPlugins/group/calico/NetCatPod 8.18
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.21
314 TestNetworkPlugins/group/calico/DNS 0.12
315 TestNetworkPlugins/group/custom-flannel/DNS 0.14
316 TestNetworkPlugins/group/calico/Localhost 0.12
317 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
318 TestNetworkPlugins/group/calico/HairPin 0.12
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
320 TestNetworkPlugins/group/enable-default-cni/Start 80.06
321 TestNetworkPlugins/group/flannel/Start 58
322 TestNetworkPlugins/group/bridge/Start 38.98
323 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
324 TestNetworkPlugins/group/bridge/NetCatPod 8.2
325 TestNetworkPlugins/group/bridge/DNS 0.11
326 TestNetworkPlugins/group/bridge/Localhost 0.09
327 TestNetworkPlugins/group/bridge/HairPin 0.09
328 TestNetworkPlugins/group/flannel/ControllerPod 6.01
329 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
330 TestNetworkPlugins/group/flannel/NetCatPod 10.17
332 TestStartStop/group/old-k8s-version/serial/FirstStart 137.53
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.69
335 TestNetworkPlugins/group/flannel/DNS 0.12
336 TestNetworkPlugins/group/flannel/Localhost 0.1
337 TestNetworkPlugins/group/flannel/HairPin 0.1
338 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
339 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
340 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
342 TestStartStop/group/no-preload/serial/FirstStart 63.07
344 TestStartStop/group/embed-certs/serial/FirstStart 51.86
345 TestStartStop/group/embed-certs/serial/DeployApp 9.22
346 TestStartStop/group/no-preload/serial/DeployApp 10.21
347 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.81
348 TestStartStop/group/embed-certs/serial/Stop 11.87
349 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
350 TestStartStop/group/no-preload/serial/Stop 11.94
351 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
352 TestStartStop/group/embed-certs/serial/SecondStart 262.5
353 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
354 TestStartStop/group/no-preload/serial/SecondStart 262.36
355 TestStartStop/group/old-k8s-version/serial/DeployApp 9.38
356 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
357 TestStartStop/group/old-k8s-version/serial/Stop 12.08
358 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
359 TestStartStop/group/old-k8s-version/serial/SecondStart 311.33
360 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
361 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
363 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
364 TestStartStop/group/embed-certs/serial/Pause 2.54
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.35
368 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
369 TestStartStop/group/no-preload/serial/Pause 2.8
371 TestStartStop/group/newest-cni/serial/FirstStart 36.24
372 TestStartStop/group/newest-cni/serial/DeployApp 0
373 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.94
374 TestStartStop/group/newest-cni/serial/Stop 1.18
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
376 TestStartStop/group/newest-cni/serial/SecondStart 13.38
377 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
378 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
380 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
381 TestStartStop/group/newest-cni/serial/Pause 2.65
382 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.98
383 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.95
384 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
385 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.29
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
389 TestStartStop/group/old-k8s-version/serial/Pause 2.37
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.47
x
+
TestDownloadOnly/v1.20.0/json-events (13.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-693717 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-693717 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.702296684s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.70s)
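The json-events run above drives `start -o=json --download-only ...`, which streams one JSON event per stdout line. The sketch below is a hypothetical consumer of that stream, not the test's own parser: it assumes only that each line is a self-contained JSON object and decodes into a generic map, so no specific field names are relied on. The binary path, profile name, and flags are copied (slightly condensed) from the run above.

```go
// Hypothetical sketch: consume the JSON event stream from
// "minikube start -o=json --download-only ..." one line at a time.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"start", "-o=json", "--download-only", "-p", "download-only-693717",
		"--force", "--alsologtostderr",
		"--kubernetes-version=v1.20.0",
		"--container-runtime=containerd", "--driver=docker")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	sc := bufio.NewScanner(stdout)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var event map[string]interface{}
		if err := json.Unmarshal(sc.Bytes(), &event); err != nil {
			continue // skip any non-JSON line in this sketch
		}
		fmt.Printf("event with %d fields: %v\n", len(event), event)
	}
	_ = cmd.Wait()
}
```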

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-693717
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-693717: exit status 85 (52.455539ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-693717 | jenkins | v1.33.1 | 03 Jul 24 23:01 UTC |          |
	|         | -p download-only-693717        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:01:30
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:01:30.151427   18943 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:01:30.151682   18943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:01:30.151692   18943 out.go:304] Setting ErrFile to fd 2...
	I0703 23:01:30.151696   18943 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:01:30.151864   18943 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	W0703 23:01:30.151987   18943 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18859-12140/.minikube/config/config.json: open /home/jenkins/minikube-integration/18859-12140/.minikube/config/config.json: no such file or directory
	I0703 23:01:30.152572   18943 out.go:298] Setting JSON to true
	I0703 23:01:30.153451   18943 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2632,"bootTime":1720045058,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:01:30.153510   18943 start.go:139] virtualization: kvm guest
	I0703 23:01:30.155841   18943 out.go:97] [download-only-693717] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0703 23:01:30.155958   18943 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball: no such file or directory
	I0703 23:01:30.155985   18943 notify.go:220] Checking for updates...
	I0703 23:01:30.157312   18943 out.go:169] MINIKUBE_LOCATION=18859
	I0703 23:01:30.158789   18943 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:01:30.159900   18943 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	I0703 23:01:30.160975   18943 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	I0703 23:01:30.162073   18943 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 23:01:30.163976   18943 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 23:01:30.164230   18943 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:01:30.187668   18943 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0703 23:01:30.187765   18943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:01:30.532519   18943 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-03 23:01:30.524107859 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:01:30.532623   18943 docker.go:295] overlay module found
	I0703 23:01:30.534134   18943 out.go:97] Using the docker driver based on user configuration
	I0703 23:01:30.534155   18943 start.go:297] selected driver: docker
	I0703 23:01:30.534161   18943 start.go:901] validating driver "docker" against <nil>
	I0703 23:01:30.534238   18943 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:01:30.578563   18943 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-03 23:01:30.570728486 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:01:30.578767   18943 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:01:30.579242   18943 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0703 23:01:30.579379   18943 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 23:01:30.581171   18943 out.go:169] Using Docker driver with root privileges
	I0703 23:01:30.582326   18943 cni.go:84] Creating CNI manager for ""
	I0703 23:01:30.582345   18943 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0703 23:01:30.582355   18943 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0703 23:01:30.582432   18943 start.go:340] cluster config:
	{Name:download-only-693717 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-693717 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:01:30.583633   18943 out.go:97] Starting "download-only-693717" primary control-plane node in "download-only-693717" cluster
	I0703 23:01:30.583650   18943 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0703 23:01:30.584670   18943 out.go:97] Pulling base image v0.0.44-1719972989-19184 ...
	I0703 23:01:30.584689   18943 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0703 23:01:30.584732   18943 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 23:01:30.601581   18943 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 to local cache
	I0703 23:01:30.601754   18943 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local cache directory
	I0703 23:01:30.601847   18943 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 to local cache
	I0703 23:01:30.684610   18943 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	I0703 23:01:30.684634   18943 cache.go:56] Caching tarball of preloaded images
	I0703 23:01:30.684777   18943 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0703 23:01:30.686391   18943 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0703 23:01:30.686404   18943 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4 ...
	I0703 23:01:30.789684   18943 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:c28dc5b6f01e4b826afa7afc8a0fd1fd -> /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-693717 host does not exist
	  To start a cluster, run: "minikube start -p download-only-693717"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-693717
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/json-events (11.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-579179 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-579179 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.323867985s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (11.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-579179
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-579179: exit status 85 (52.947669ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-693717 | jenkins | v1.33.1 | 03 Jul 24 23:01 UTC |                     |
	|         | -p download-only-693717        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jul 24 23:01 UTC | 03 Jul 24 23:01 UTC |
	| delete  | -p download-only-693717        | download-only-693717 | jenkins | v1.33.1 | 03 Jul 24 23:01 UTC | 03 Jul 24 23:01 UTC |
	| start   | -o=json --download-only        | download-only-579179 | jenkins | v1.33.1 | 03 Jul 24 23:01 UTC |                     |
	|         | -p download-only-579179        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 23:01:44
	Running on machine: ubuntu-20-agent-8
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 23:01:44.213309   19308 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:01:44.213413   19308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:01:44.213422   19308 out.go:304] Setting ErrFile to fd 2...
	I0703 23:01:44.213426   19308 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:01:44.213576   19308 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:01:44.214099   19308 out.go:298] Setting JSON to true
	I0703 23:01:44.214891   19308 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":2646,"bootTime":1720045058,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:01:44.214943   19308 start.go:139] virtualization: kvm guest
	I0703 23:01:44.217060   19308 out.go:97] [download-only-579179] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:01:44.217180   19308 notify.go:220] Checking for updates...
	I0703 23:01:44.218529   19308 out.go:169] MINIKUBE_LOCATION=18859
	I0703 23:01:44.219605   19308 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:01:44.220839   19308 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	I0703 23:01:44.221987   19308 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	I0703 23:01:44.223156   19308 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 23:01:44.225237   19308 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 23:01:44.225440   19308 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:01:44.245639   19308 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0703 23:01:44.245733   19308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:01:44.289465   19308 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-03 23:01:44.281157475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:01:44.289589   19308 docker.go:295] overlay module found
	I0703 23:01:44.291321   19308 out.go:97] Using the docker driver based on user configuration
	I0703 23:01:44.291343   19308 start.go:297] selected driver: docker
	I0703 23:01:44.291348   19308 start.go:901] validating driver "docker" against <nil>
	I0703 23:01:44.291422   19308 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:01:44.338738   19308 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-03 23:01:44.330339127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:01:44.338904   19308 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 23:01:44.339360   19308 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0703 23:01:44.339528   19308 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 23:01:44.340979   19308 out.go:169] Using Docker driver with root privileges
	I0703 23:01:44.342024   19308 cni.go:84] Creating CNI manager for ""
	I0703 23:01:44.342040   19308 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0703 23:01:44.342052   19308 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0703 23:01:44.342116   19308 start.go:340] cluster config:
	{Name:download-only-579179 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:download-only-579179 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:01:44.343280   19308 out.go:97] Starting "download-only-579179" primary control-plane node in "download-only-579179" cluster
	I0703 23:01:44.343295   19308 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0703 23:01:44.344267   19308 out.go:97] Pulling base image v0.0.44-1719972989-19184 ...
	I0703 23:01:44.344287   19308 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0703 23:01:44.344385   19308 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local docker daemon
	I0703 23:01:44.359412   19308 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 to local cache
	I0703 23:01:44.359524   19308 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local cache directory
	I0703 23:01:44.359540   19308 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 in local cache directory, skipping pull
	I0703 23:01:44.359544   19308 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 exists in cache, skipping pull
	I0703 23:01:44.359551   19308 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 as a tarball
	I0703 23:01:44.440843   19308 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4
	I0703 23:01:44.440871   19308 cache.go:56] Caching tarball of preloaded images
	I0703 23:01:44.441000   19308 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime containerd
	I0703 23:01:44.442535   19308 out.go:97] Downloading Kubernetes v1.30.2 preload ...
	I0703 23:01:44.442550   19308 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 ...
	I0703 23:01:44.545060   19308 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.2/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:a69e65264a76d4a498a2c6efe8e151d6 -> /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4
	I0703 23:01:54.003122   19308 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 ...
	I0703 23:01:54.003214   19308 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-579179 host does not exist
	  To start a cluster, run: "minikube start -p download-only-579179"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.05s)
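The v1.30.2 log above shows the preload tarball being downloaded with `?checksum=md5:a69e65264a76d4a498a2c6efe8e151d6` and then the checksum being saved and verified. The sketch below is a standalone, hypothetical re-check of that file against the same md5 using only the standard library; the file path and expected checksum are copied from the log, and this is not minikube's own verification code.

```go
// Hypothetical sketch: verify the downloaded preload tarball against the md5
// value that appears in the download URL above.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

func main() {
	const expected = "a69e65264a76d4a498a2c6efe8e151d6" // from the checksum query param above
	path := "/home/jenkins/minikube-integration/18859-12140/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.30.2-containerd-overlay2-amd64.tar.lz4"

	f, err := os.Open(path)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		fmt.Println("read:", err)
		return
	}
	got := hex.EncodeToString(h.Sum(nil))
	if got != expected {
		fmt.Printf("checksum mismatch: got %s, want %s\n", got, expected)
		return
	}
	fmt.Println("preload checksum verified")
}
```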

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAll (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-579179
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.04s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-080372 --alsologtostderr --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "download-docker-080372" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-080372
--- PASS: TestDownloadOnlyKic (1.04s)

                                                
                                    
x
+
TestBinaryMirror (0.68s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-615459 --alsologtostderr --binary-mirror http://127.0.0.1:36525 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-615459" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-615459
--- PASS: TestBinaryMirror (0.68s)

                                                
                                    
x
+
TestOffline (56.26s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-974444 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-974444 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (54.10497319s)
helpers_test.go:175: Cleaning up "offline-containerd-974444" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-974444
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-974444: (2.15738188s)
--- PASS: TestOffline (56.26s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-095645
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-095645: exit status 85 (44.545494ms)

                                                
                                                
-- stdout --
	* Profile "addons-095645" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-095645"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-095645
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-095645: exit status 85 (46.401582ms)

                                                
                                                
-- stdout --
	* Profile "addons-095645" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-095645"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (211.8s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-095645 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-095645 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m31.800008766s)
--- PASS: TestAddons/Setup (211.80s)
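
The setup above simply passes each addon as a --addons flag at start time. For reference, a smaller hand-run equivalent looks like the sketch below; the profile name addons-demo and the reduced addon list are chosen here for illustration only.
    minikube start -p addons-demo --memory=4000 --driver=docker --container-runtime=containerd \
      --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
    # addons can also be toggled after the cluster is up
    minikube addons enable csi-hostpath-driver -p addons-demo
    minikube addons list -p addons-demo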

TestAddons/parallel/Registry (14.76s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 12.079351ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9qsqt" [8fe93d69-66d8-4e1a-a50e-d6dea34f628b] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005952236s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-rb8jk" [cc7a715c-ed56-47e0-9d92-2794a1c979f6] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004142251s
addons_test.go:342: (dbg) Run:  kubectl --context addons-095645 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-095645 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-095645 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.91075383s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 ip
2024/07/03 23:05:43 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.76s)
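
The registry check boils down to resolving the in-cluster service and then hitting the node-level proxy; a sketch of the manual equivalent, reusing the commands and profile from this run:
    minikube addons enable registry -p addons-095645
    # in-cluster reachability via the kube-system service DNS name
    kubectl --context addons-095645 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # registry-proxy exposes the registry on the node IP at port 5000
    curl -sI http://$(minikube -p addons-095645 ip):5000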

TestAddons/parallel/Ingress (18.88s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-095645 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-095645 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-095645 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7cd86171-b2ea-49fa-85e5-da857aacee54] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7cd86171-b2ea-49fa-85e5-da857aacee54] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.002944928s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-095645 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-095645 addons disable ingress-dns --alsologtostderr -v=1: (1.107827522s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-095645 addons disable ingress --alsologtostderr -v=1: (7.659364903s)
--- PASS: TestAddons/parallel/Ingress (18.88s)
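
The ingress check serves a backend through the ingress-nginx controller on the node and resolves a test hostname through ingress-dns; a sketch of the same steps by hand (the testdata manifests themselves are not reproduced in this log):
    minikube addons enable ingress -p addons-095645
    minikube addons enable ingress-dns -p addons-095645
    kubectl --context addons-095645 wait --for=condition=ready --namespace=ingress-nginx \
      pod --selector=app.kubernetes.io/component=controller --timeout=90s
    # the controller listens on the node, so curl from inside it with the expected Host header
    minikube -p addons-095645 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns answers for the test hostname at the node IP
    nslookup hello-john.test $(minikube -p addons-095645 ip)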

TestAddons/parallel/InspektorGadget (10.72s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-g8bpf" [477666d9-ef6c-4db1-bc51-cdc15157c318] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003866451s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-095645
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-095645: (5.711121628s)
--- PASS: TestAddons/parallel/InspektorGadget (10.72s)

TestAddons/parallel/MetricsServer (5.62s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.386236ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-zz7kv" [31762fad-0e8f-47ff-9eab-f7451a30ad26] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004380976s
addons_test.go:417: (dbg) Run:  kubectl --context addons-095645 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)
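
Once metrics-server reports healthy, resource metrics are available through kubectl top; a minimal sketch using the same context:
    minikube addons enable metrics-server -p addons-095645
    # the metrics API can take a minute or two to populate after enabling
    kubectl --context addons-095645 top pods -n kube-system
    kubectl --context addons-095645 top nodes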

TestAddons/parallel/HelmTiller (13.94s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.719647ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-b8bmv" [5cb261f1-f189-4658-be58-0ed291c8bc99] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004575652s
addons_test.go:475: (dbg) Run:  kubectl --context addons-095645 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-095645 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.332306219s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.94s)
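
The Tiller check runs a one-off Helm 2 client pod against tiller-deploy in kube-system; a sketch reusing the image and namespace from the log:
    minikube addons enable helm-tiller -p addons-095645
    # the version subcommand contacts tiller and reports both client and server versions
    kubectl --context addons-095645 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version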

TestAddons/parallel/CSI (50.7s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.400794ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a7329be7-c477-44da-acea-9925fd7a1add] Pending
helpers_test.go:344: "task-pv-pod" [a7329be7-c477-44da-acea-9925fd7a1add] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a7329be7-c477-44da-acea-9925fd7a1add] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003540184s
addons_test.go:586: (dbg) Run:  kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-095645 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-095645 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-095645 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-095645 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1bd2d919-b2a4-4f0c-91dd-d909100b62ba] Pending
helpers_test.go:344: "task-pv-pod-restore" [1bd2d919-b2a4-4f0c-91dd-d909100b62ba] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1bd2d919-b2a4-4f0c-91dd-d909100b62ba] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003254973s
addons_test.go:628: (dbg) Run:  kubectl --context addons-095645 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-095645 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-095645 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-095645 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.513308705s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.70s)
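
The CSI flow above is claim -> pod -> snapshot -> restored claim -> pod, polled with jsonpath. A condensed sketch of the same sequence; the testdata manifests live in minikube's integration-test tree and are not reproduced here:
    kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-095645 get pvc hpvc -o jsonpath={.status.phase} -n default
    kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-095645 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
    kubectl --context addons-095645 delete pod task-pv-pod && kubectl --context addons-095645 delete pvc hpvc
    kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-095645 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
    # cleanup
    kubectl --context addons-095645 delete pod task-pv-pod-restore
    kubectl --context addons-095645 delete pvc hpvc-restore
    kubectl --context addons-095645 delete volumesnapshot new-snapshot-demo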

TestAddons/parallel/Headlamp (12.73s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-095645 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tc64v" [3d390839-0617-442a-a519-5bdc40fe1f9c] Pending
helpers_test.go:344: "headlamp-7867546754-tc64v" [3d390839-0617-442a-a519-5bdc40fe1f9c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tc64v" [3d390839-0617-442a-a519-5bdc40fe1f9c] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.003684613s
--- PASS: TestAddons/parallel/Headlamp (12.73s)

TestAddons/parallel/CloudSpanner (6.47s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-9sqwh" [362eb864-64aa-4ff1-b2ce-4c09350ce153] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003105941s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-095645
--- PASS: TestAddons/parallel/CloudSpanner (6.47s)

TestAddons/parallel/LocalPath (58.01s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-095645 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-095645 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [dadf4394-8019-4667-9f70-15dd571a8131] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [dadf4394-8019-4667-9f70-15dd571a8131] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [dadf4394-8019-4667-9f70-15dd571a8131] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003116219s
addons_test.go:992: (dbg) Run:  kubectl --context addons-095645 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 ssh "cat /opt/local-path-provisioner/pvc-e483ccf9-c0b1-44df-b471-8dfc25037ccd_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-095645 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-095645 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-linux-amd64 -p addons-095645 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.111332587s)
--- PASS: TestAddons/parallel/LocalPath (58.01s)

TestAddons/parallel/NvidiaDevicePlugin (6.45s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-r5gvr" [fb22e6ec-b0bf-4b60-adb5-f0a3bf313c52] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.00402201s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-095645
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (6s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-zkmxp" [0a973fd9-02ba-47c3-b0d9-611566b92ff2] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003742785s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/parallel/Volcano (39.01s)
=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 11.879305ms
addons_test.go:905: volcano-controller stabilized in 11.942669ms
addons_test.go:889: volcano-scheduler stabilized in 12.235686ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-9ngbq" [399d9543-739d-4506-8269-d8a314340be9] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.00488294s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-c8vgk" [b7586213-c818-41a6-b3eb-4d6460af01e4] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.003205624s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-7sxt9" [9029a945-b805-4e41-ab8d-eb77172f7319] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003475081s
addons_test.go:924: (dbg) Run:  kubectl --context addons-095645 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-095645 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-095645 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [14145f29-6605-4e41-89b0-39792280ebec] Pending
helpers_test.go:344: "test-job-nginx-0" [14145f29-6605-4e41-89b0-39792280ebec] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [14145f29-6605-4e41-89b0-39792280ebec] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 13.003435818s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-095645 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-095645 addons disable volcano --alsologtostderr -v=1: (10.69799414s)
--- PASS: TestAddons/parallel/Volcano (39.01s)
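
The Volcano check submits a vcjob and waits for its pod to be scheduled; a sketch of the relevant commands from the run (the vcjob manifest is not shown in this log):
    kubectl --context addons-095645 delete -n volcano-system job volcano-admission-init
    kubectl --context addons-095645 create -f testdata/vcjob.yaml
    kubectl --context addons-095645 get vcjob -n my-volcano
    kubectl --context addons-095645 get pods -n my-volcano -l volcano.sh/job-name=test-job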

TestAddons/serial/GCPAuth/Namespaces (0.11s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-095645 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-095645 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (12.04s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-095645
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-095645: (11.810630356s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-095645
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-095645
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-095645
--- PASS: TestAddons/StoppedEnableDisable (12.04s)
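
A minimal sketch of this check; while the profile is stopped, enabling or disabling an addon only touches its saved configuration, to take effect when the cluster next starts:
    minikube stop -p addons-095645
    minikube addons enable dashboard -p addons-095645
    minikube addons disable dashboard -p addons-095645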

TestCertOptions (22.66s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-410480 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-410480 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (20.276184625s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-410480 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-410480 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-410480 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-410480" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-410480
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-410480: (1.839934724s)
--- PASS: TestCertOptions (22.66s)
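
The flags above control the SANs and port baked into the apiserver certificate; a sketch of checking them by hand with the same profile:
    minikube start -p cert-options-410480 --memory=2048 --driver=docker --container-runtime=containerd \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555
    # the extra IPs and names should show up as Subject Alternative Names
    minikube -p cert-options-410480 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # admin.conf should reference the custom apiserver port 8555
    minikube ssh -p cert-options-410480 -- "sudo cat /etc/kubernetes/admin.conf"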

TestCertExpiration (213.28s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-497340 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-497340 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (25.490922823s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-497340 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-497340 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (5.471822502s)
helpers_test.go:175: Cleaning up "cert-expiration-497340" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-497340
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-497340: (2.316893229s)
--- PASS: TestCertExpiration (213.28s)
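
The expiration check creates certificates that expire almost immediately and then restarts with a long --cert-expiration once they have lapsed; a sketch with the same profile:
    minikube start -p cert-expiration-497340 --memory=2048 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # wait out the 3m window, then restart with a one-year expiry so the certs are regenerated
    minikube start -p cert-expiration-497340 --memory=2048 --cert-expiration=8760h --driver=docker --container-runtime=containerd
    minikube delete -p cert-expiration-497340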

TestForceSystemdFlag (23.5s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-201036 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-201036 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (21.442025184s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-201036 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-201036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-201036
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-201036: (1.822366512s)
--- PASS: TestForceSystemdFlag (23.50s)

TestForceSystemdEnv (27.39s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-787865 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-787865 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (24.755038236s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-787865 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-787865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-787865
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-787865: (2.384363839s)
--- PASS: TestForceSystemdEnv (27.39s)
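
Both systemd tests end by reading /etc/containerd/config.toml inside the node to confirm the cgroup driver; a sketch of the two ways of requesting it (the flag is taken from the log, while MINIKUBE_FORCE_SYSTEMD=true is an assumed spelling based on the variable shown in minikube's startup output):
    # via flag
    minikube start -p force-systemd-flag-201036 --memory=2048 --force-systemd --driver=docker --container-runtime=containerd
    # via environment variable
    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-787865 --memory=2048 --driver=docker --container-runtime=containerd
    # either way the containerd config inside the node should carry the systemd cgroup setting
    minikube -p force-systemd-flag-201036 ssh "grep -i systemd /etc/containerd/config.toml"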

TestDockerEnvContainerd (35.74s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux amd64
docker_test.go:181: (dbg) Run:  out/minikube-linux-amd64 start -p dockerenv-317313 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-amd64 start -p dockerenv-317313 --driver=docker  --container-runtime=containerd: (20.375630508s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-amd64 docker-env --ssh-host --ssh-add -p dockerenv-317313"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-7GtRbeqlacUQ/agent.44847" SSH_AGENT_PID="44848" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-7GtRbeqlacUQ/agent.44847" SSH_AGENT_PID="44848" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-7GtRbeqlacUQ/agent.44847" SSH_AGENT_PID="44848" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.749958632s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-7GtRbeqlacUQ/agent.44847" SSH_AGENT_PID="44848" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-317313" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p dockerenv-317313
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p dockerenv-317313: (1.810989225s)
--- PASS: TestDockerEnvContainerd (35.74s)
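
docker-env with --ssh-host and --ssh-add points a local docker CLI at the engine inside the minikube node over SSH, which is what the scripted environment variables above do explicitly; a condensed sketch:
    minikube start -p dockerenv-317313 --driver=docker --container-runtime=containerd
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-317313)"
    docker version
    # run from a directory containing a Dockerfile
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest .
    docker image ls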

TestKVMDriverInstallOrUpdate (3.15s)
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.15s)

TestErrorSpam/setup (22.75s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-402121 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-402121 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-402121 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-402121 --driver=docker  --container-runtime=containerd: (22.745096827s)
--- PASS: TestErrorSpam/setup (22.75s)

TestErrorSpam/start (0.53s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

TestErrorSpam/status (0.8s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 status
--- PASS: TestErrorSpam/status (0.80s)

TestErrorSpam/pause (1.4s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 pause
--- PASS: TestErrorSpam/pause (1.40s)

TestErrorSpam/unpause (1.37s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 unpause
--- PASS: TestErrorSpam/unpause (1.37s)

TestErrorSpam/stop (1.32s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 stop: (1.160625929s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-402121 --log_dir /tmp/nospam-402121 stop
--- PASS: TestErrorSpam/stop (1.32s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18859-12140/.minikube/files/etc/test/nested/copy/18931/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (52.25s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-444884 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-444884 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (52.245861537s)
--- PASS: TestFunctional/serial/StartWithProxy (52.25s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.07s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-444884 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-444884 --alsologtostderr -v=8: (5.072071138s)
functional_test.go:659: soft start took 5.072738151s for "functional-444884" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.07s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-444884 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 cache add registry.k8s.io/pause:3.1: (1.077670351s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 cache add registry.k8s.io/pause:3.3: (1.154512536s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

TestFunctional/serial/CacheCmd/cache/add_local (1.96s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-444884 /tmp/TestFunctionalserialCacheCmdcacheadd_local1817182853/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cache add minikube-local-cache-test:functional-444884
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 cache add minikube-local-cache-test:functional-444884: (1.677812974s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cache delete minikube-local-cache-test:functional-444884
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-444884
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (249.026277ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 cache reload: (1.060004307s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)
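
The cache subtests exercise minikube's image cache end to end: add an image, remove it from the node's runtime, then reload it from the local cache. A short sketch of that cycle with the same profile:
    minikube -p functional-444884 cache add registry.k8s.io/pause:latest
    # simulate the image disappearing from the node
    minikube -p functional-444884 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # push everything in the local cache back into the node
    minikube -p functional-444884 cache reload
    minikube -p functional-444884 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    minikube cache list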

TestFunctional/serial/CacheCmd/cache/delete (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 kubectl -- --context functional-444884 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-444884 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (44.42s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-444884 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-444884 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.421158946s)
functional_test.go:757: restart took 44.421288323s for "functional-444884" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.42s)
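
--extra-config forwards per-component flags (apiserver, kubelet, scheduler, and so on) in <component>.<key> form; a sketch of the restart used above plus one more illustrative key (kubelet.max-pods is an assumed example, not taken from this run):
    minikube start -p functional-444884 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    minikube start -p functional-444884 --extra-config=kubelet.max-pods=150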

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-444884 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.19s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 logs: (1.190966916s)
--- PASS: TestFunctional/serial/LogsCmd (1.19s)

TestFunctional/serial/LogsFileCmd (1.21s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 logs --file /tmp/TestFunctionalserialLogsFileCmd861378910/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 logs --file /tmp/TestFunctionalserialLogsFileCmd861378910/001/logs.txt: (1.207775651s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/serial/InvalidService (3.53s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-444884 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-444884
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-444884: exit status 115 (298.36931ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30650 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-444884 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.53s)

TestFunctional/parallel/ConfigCmd (0.31s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 config get cpus: exit status 14 (51.630591ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 config get cpus: exit status 14 (48.603294ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.31s)
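
The subtest simply cycles `config set`/`get`/`unset` on the cpus key; `config get` on an unset key exits 14 with "specified key could not be found in config", which is what the Non-zero exits above assert. Sketch:

    out/minikube-linux-amd64 -p functional-444884 config unset cpus
    out/minikube-linux-amd64 -p functional-444884 config get cpus; echo "exit $?"   # exit 14
    out/minikube-linux-amd64 -p functional-444884 config set cpus 2
    out/minikube-linux-amd64 -p functional-444884 config get cpus                   # prints 2
    out/minikube-linux-amd64 -p functional-444884 config unset cpus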

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (21.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-444884 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-444884 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 65510: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (21.08s)
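
The dashboard subtest starts the proxy on a fixed port and asks for its URL rather than opening a browser; the "unable to kill pid" message above is just the teardown finding the backgrounded process already gone. Manual equivalent (sketch; the port number is this run's choice):

    # Serve the dashboard proxy on port 36195 and print its URL.
    out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-444884 --alsologtostderr -v=1
    # Ctrl-C stops the proxy; the test kills the daemonized process instead.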

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-444884 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-444884 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (162.648627ms)

                                                
                                                
-- stdout --
	* [functional-444884] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:11:26.223557   64819 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:11:26.223718   64819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:11:26.223725   64819 out.go:304] Setting ErrFile to fd 2...
	I0703 23:11:26.223732   64819 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:11:26.224127   64819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:11:26.225607   64819 out.go:298] Setting JSON to false
	I0703 23:11:26.226753   64819 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3228,"bootTime":1720045058,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:11:26.226838   64819 start.go:139] virtualization: kvm guest
	I0703 23:11:26.229239   64819 out.go:177] * [functional-444884] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:11:26.230687   64819 out.go:177]   - MINIKUBE_LOCATION=18859
	I0703 23:11:26.230719   64819 notify.go:220] Checking for updates...
	I0703 23:11:26.233718   64819 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:11:26.235055   64819 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	I0703 23:11:26.236289   64819 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	I0703 23:11:26.240329   64819 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:11:26.241748   64819 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:11:26.243457   64819 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:11:26.244134   64819 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:11:26.270453   64819 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0703 23:11:26.270563   64819 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:11:26.324136   64819 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-03 23:11:26.310892096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:11:26.324546   64819 docker.go:295] overlay module found
	I0703 23:11:26.326280   64819 out.go:177] * Using the docker driver based on existing profile
	I0703 23:11:26.327564   64819 start.go:297] selected driver: docker
	I0703 23:11:26.327581   64819 start.go:901] validating driver "docker" against &{Name:functional-444884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-444884 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:11:26.327704   64819 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:11:26.330026   64819 out.go:177] 
	W0703 23:11:26.331231   64819 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0703 23:11:26.332547   64819 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-444884 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.36s)
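
Exit status 23 here is the RSRC_INSUFFICIENT_REQ_MEMORY validation error: --dry-run checks the requested flags against the existing profile without touching the cluster, and 250MB is below the 1800MB usable minimum. The two invocations, as run above:

    # Fails validation (exit 23): memory request below the minimum.
    out/minikube-linux-amd64 start -p functional-444884 --dry-run --memory 250MB \
        --alsologtostderr --driver=docker --container-runtime=containerd
    # The same dry run without the undersized memory request validates cleanly.
    out/minikube-linux-amd64 start -p functional-444884 --dry-run \
        --alsologtostderr -v=1 --driver=docker --container-runtime=containerd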

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-444884 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-444884 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (180.688486ms)

                                                
                                                
-- stdout --
	* [functional-444884] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:11:26.037548   64683 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:11:26.037641   64683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:11:26.037646   64683 out.go:304] Setting ErrFile to fd 2...
	I0703 23:11:26.037652   64683 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:11:26.037937   64683 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:11:26.038433   64683 out.go:298] Setting JSON to false
	I0703 23:11:26.039627   64683 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":3228,"bootTime":1720045058,"procs":249,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:11:26.039713   64683 start.go:139] virtualization: kvm guest
	I0703 23:11:26.041788   64683 out.go:177] * [functional-444884] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0703 23:11:26.043324   64683 out.go:177]   - MINIKUBE_LOCATION=18859
	I0703 23:11:26.043389   64683 notify.go:220] Checking for updates...
	I0703 23:11:26.045731   64683 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:11:26.047026   64683 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	I0703 23:11:26.048149   64683 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	I0703 23:11:26.049394   64683 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:11:26.050667   64683 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:11:26.052372   64683 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:11:26.053106   64683 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:11:26.076133   64683 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0703 23:11:26.076226   64683 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:11:26.127154   64683 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-03 23:11:26.116083603 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:11:26.127283   64683 docker.go:295] overlay module found
	I0703 23:11:26.128669   64683 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0703 23:11:26.162925   64683 start.go:297] selected driver: docker
	I0703 23:11:26.162999   64683 start.go:901] validating driver "docker" against &{Name:functional-444884 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-444884 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 23:11:26.163122   64683 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:11:26.165689   64683 out.go:177] 
	W0703 23:11:26.166852   64683 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0703 23:11:26.168100   64683 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
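
The French output above comes from minikube's localisation; the harness presumably forces a French locale through the environment before invoking the binary (an assumption, since the exact variable it sets is not visible in this log). A hedged sketch:

    # Assumption: minikube selects its translation from the standard locale
    # variables, so forcing fr should reproduce the localized message.
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-444884 --dry-run --memory 250MB \
        --alsologtostderr --driver=docker --container-runtime=containerd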

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.89s)
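
Three output modes of `minikube status` are covered: the default table, a custom Go template over the status fields (Host, Kubelet, APIServer, Kubeconfig), and JSON. Sketch:

    out/minikube-linux-amd64 -p functional-444884 status
    out/minikube-linux-amd64 -p functional-444884 status \
        -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-444884 status -o json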

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (60.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-444884 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-444884 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2lslf" [d314a25f-d418-486e-898d-2ee87b768a8d] Pending
helpers_test.go:344: "hello-node-connect-57b4589c47-2lslf" [d314a25f-d418-486e-898d-2ee87b768a8d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2lslf" [d314a25f-d418-486e-898d-2ee87b768a8d] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 1m0.003363468s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31853
functional_test.go:1671: http://192.168.49.2:31853: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-2lslf

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31853
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (60.47s)
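
The flow exercised here is: deploy an echoserver, expose it as a NodePort Service, resolve the node URL through minikube, and hit it over HTTP (the response body above is the echoserver's standard dump). A sketch of the same steps; the `kubectl wait` line is an addition for convenience, the rest mirrors the commands above:

    kubectl --context functional-444884 create deployment hello-node-connect \
        --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-444884 expose deployment hello-node-connect \
        --type=NodePort --port=8080
    kubectl --context functional-444884 wait --for=condition=ready pod \
        -l app=hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-444884 service hello-node-connect --url)
    curl -s "$URL"    # echoserver reports hostname, server values and request headers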

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)
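
For completeness, the two listing forms exercised:

    out/minikube-linux-amd64 -p functional-444884 addons list           # human-readable table
    out/minikube-linux-amd64 -p functional-444884 addons list -o json   # machine-readable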

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (88.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7d42615b-e87d-4a98-8c30-eadd85e1bf89] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003657511s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-444884 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-444884 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-444884 get pvc myclaim -o=json
E0703 23:10:29.653224   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:29.659101   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:29.669347   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:29.690089   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:29.730344   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:29.810644   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:29.971305   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:30.291964   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:10:30.932735   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-444884 get pvc myclaim -o=json
E0703 23:10:32.213156   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-444884 get pvc myclaim -o=json
E0703 23:10:34.773951   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-444884 get pvc myclaim -o=json
E0703 23:10:39.894637   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-444884 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-444884 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-444884 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4ff41495-27a6-414d-b2cb-a598f069968e] Pending
helpers_test.go:344: "sp-pod" [4ff41495-27a6-414d-b2cb-a598f069968e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: persistentvolumeclaim "myclaim" not found. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "sp-pod" [4ff41495-27a6-414d-b2cb-a598f069968e] Pending
E0703 23:11:10.616033   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [4ff41495-27a6-414d-b2cb-a598f069968e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4ff41495-27a6-414d-b2cb-a598f069968e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 40.006401172s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-444884 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-444884 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-444884 delete -f testdata/storage-provisioner/pod.yaml: (1.43710199s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-444884 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [306d6b74-090d-4581-b346-163b6cdd9e6b] Pending
helpers_test.go:344: "sp-pod" [306d6b74-090d-4581-b346-163b6cdd9e6b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [306d6b74-090d-4581-b346-163b6cdd9e6b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.003551649s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-444884 exec sp-pod -- ls /tmp/mount
E0703 23:11:51.577085   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (88.20s)
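
What this verifies is that data written through a PVC provisioned by the default storage-provisioner survives pod recreation: the first sp-pod touches /tmp/mount/foo, the pod is deleted, and a second pod mounting the same claim still sees the file. The interleaved cert_rotation errors appear to be unrelated background noise referring to the earlier addons-095645 profile. A sketch using the repository's testdata manifests (the `kubectl wait` lines are added for convenience):

    kubectl --context functional-444884 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-444884 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-444884 wait --for=condition=ready pod sp-pod --timeout=180s
    kubectl --context functional-444884 exec sp-pod -- touch /tmp/mount/foo
    # Delete only the pod, keeping the claim, then recreate it.
    kubectl --context functional-444884 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-444884 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-444884 wait --for=condition=ready pod sp-pod --timeout=180s
    kubectl --context functional-444884 exec sp-pod -- ls /tmp/mount    # foo must still be listed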

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)
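
Equivalent manual check:

    out/minikube-linux-amd64 -p functional-444884 ssh "echo hello"
    out/minikube-linux-amd64 -p functional-444884 ssh "cat /etc/hostname"   # node hostname, i.e. the profile name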

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh -n functional-444884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cp functional-444884:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd261008191/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh -n functional-444884 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh -n functional-444884 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)
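
`minikube cp` is exercised in three directions: host to node, node back to host, and host to a node path whose directory does not exist yet. Sketch (the local destination path is arbitrary):

    # host -> node
    out/minikube-linux-amd64 -p functional-444884 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-444884 ssh -n functional-444884 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    out/minikube-linux-amd64 -p functional-444884 cp functional-444884:/home/docker/cp-test.txt /tmp/cp-test.txt
    # host -> node, creating /tmp/does/not/exist on the node as needed
    out/minikube-linux-amd64 -p functional-444884 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt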

                                                
                                    
x
+
TestFunctional/parallel/MySQL (24.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-444884 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-vn27f" [95e228b7-58a5-4207-8c04-3664a33fbc79] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-vn27f" [95e228b7-58a5-4207-8c04-3664a33fbc79] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.004035577s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;": exit status 1 (135.355957ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;": exit status 1 (104.560397ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;": exit status 1 (132.975449ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;": exit status 1 (93.166116ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
2024/07/03 23:11:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1803: (dbg) Run:  kubectl --context functional-444884 exec mysql-64454c8b5c-vn27f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.49s)
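
The repeated Non-zero exits are expected: the mysql pod reports Running before mysqld has finished initializing, so the first queries fail with "Access denied" or a socket error and the test retries until one succeeds. Sketch (the deployment name is inferred from the pod name mysql-64454c8b5c-...):

    kubectl --context functional-444884 replace --force -f testdata/mysql.yaml
    kubectl --context functional-444884 wait --for=condition=ready pod -l app=mysql --timeout=600s
    # May still fail briefly while mysqld initializes; retry until it lists the databases.
    kubectl --context functional-444884 exec deploy/mysql -- mysql -ppassword -e "show databases;"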

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/18931/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /etc/test/nested/copy/18931/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.26s)
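
FileSync checks minikube's file-sync mechanism: files staged under $MINIKUBE_HOME/files/ on the host are copied to the same path inside the node when the profile is (re)started; here the synced file is /etc/test/nested/copy/18931/hosts, 18931 being the test process's pid. A hedged sketch with a hypothetical path of your own:

    # Stage a file under the sync root (the path below is illustrative only).
    mkdir -p ~/.minikube/files/etc/demo
    echo "hello from the host" > ~/.minikube/files/etc/demo/hosts
    # The file is pushed into the node on the next start of the profile.
    out/minikube-linux-amd64 start -p functional-444884
    out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /etc/demo/hosts"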

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/18931.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /etc/ssl/certs/18931.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/18931.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /usr/share/ca-certificates/18931.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/189312.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /etc/ssl/certs/189312.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/189312.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /usr/share/ca-certificates/189312.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
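
CertSync is the same idea for CA certificates: a PEM placed in the profile's certs directory on the host shows up inside the node under /etc/ssl/certs and /usr/share/ca-certificates, together with an OpenSSL hash symlink (the 51391683.0 and 3ec20f2e.0 names above). The host-side staging directory is an assumption here (commonly ~/.minikube/certs), and my-ca.pem is hypothetical; treat this as a hedged sketch:

    # Assumption: certificates staged under ~/.minikube/certs are installed on the next start.
    cp my-ca.pem ~/.minikube/certs/
    out/minikube-linux-amd64 start -p functional-444884
    out/minikube-linux-amd64 -p functional-444884 ssh "sudo ls /usr/share/ca-certificates/"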

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-444884 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
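
The label check is plain kubectl; the Go template iterates over the first node's labels:

    kubectl --context functional-444884 get nodes --output=go-template \
        --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'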

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active docker": exit status 1 (238.386632ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active crio": exit status 1 (234.543256ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.47s)
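
With containerd selected as the container runtime, the docker and cri-o services must not be running inside the node. `systemctl is-active` prints "inactive" and exits with status 3 for a stopped unit, which is the non-zero exit the test expects. Manual check (the containerd line is an addition, not part of the test):

    out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active docker"       # inactive
    out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active crio"         # inactive
    out/minikube-linux-amd64 -p functional-444884 ssh "sudo systemctl is-active containerd"   # should report active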

                                                
                                    
x
+
TestFunctional/parallel/License (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.57s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 61324: os: process already finished
helpers_test.go:508: unable to kill pid 60912: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (60.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-444884 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ba63208a-e70c-40b3-b102-b03389cd91ea] Pending
helpers_test.go:344: "nginx-svc" [ba63208a-e70c-40b3-b102-b03389cd91ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ba63208a-e70c-40b3-b102-b03389cd91ea] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 1m0.006395198s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (60.23s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (60.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-444884 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-444884 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-fbt5q" [c9e697e3-411a-4b1d-882d-36d04227562e] Pending
E0703 23:10:50.135319   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
helpers_test.go:344: "hello-node-6d85cfcfd8-fbt5q" [c9e697e3-411a-4b1d-882d-36d04227562e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-fbt5q" [c9e697e3-411a-4b1d-882d-36d04227562e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 1m0.004785767s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (60.13s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-444884 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.88.207 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
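
Taken together, the TunnelCmd subtests walk the LoadBalancer workflow: keep `minikube tunnel` running, deploy testdata/testsvc.yaml (which creates the nginx-svc Service), wait for an ingress IP to appear, and hit it directly. A sketch split across two terminals:

    # Terminal 1: the tunnel stays in the foreground while in use.
    out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr
    # Terminal 2: deploy the service and read the IP the tunnel assigned.
    kubectl --context functional-444884 apply -f testdata/testsvc.yaml
    IP=$(kubectl --context functional-444884 get svc nginx-svc \
        -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP"    # 10.97.88.207 in this run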

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-444884 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "308.253613ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "49.476585ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdany-port3453680798/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720048284713874225" to /tmp/TestFunctionalparallelMountCmdany-port3453680798/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720048284713874225" to /tmp/TestFunctionalparallelMountCmdany-port3453680798/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720048284713874225" to /tmp/TestFunctionalparallelMountCmdany-port3453680798/001/test-1720048284713874225
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.728032ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  3 23:11 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  3 23:11 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  3 23:11 test-1720048284713874225
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh cat /mount-9p/test-1720048284713874225
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-444884 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [c5a1d4af-27e9-46b3-8e46-ad59ac6189b1] Pending
helpers_test.go:344: "busybox-mount" [c5a1d4af-27e9-46b3-8e46-ad59ac6189b1] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [c5a1d4af-27e9-46b3-8e46-ad59ac6189b1] Running
helpers_test.go:344: "busybox-mount" [c5a1d4af-27e9-46b3-8e46-ad59ac6189b1] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [c5a1d4af-27e9-46b3-8e46-ad59ac6189b1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003448862s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-444884 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdany-port3453680798/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.66s)
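
For anyone reproducing the any-port scenario by hand, the block above reduces to: start minikube mount in the background, retry the in-guest findmnt check (the first attempt above raced the mount and exited 1), then read and write files through the 9p share. A minimal Go sketch under those assumptions; the host directory and profile name are illustrative, and minikube is assumed to be on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	profile := "functional-444884" // profile name from this run; adjust as needed
	hostDir := "/tmp/mount-demo"   // hypothetical host directory to export over 9p

	if err := os.MkdirAll(hostDir, 0o755); err != nil {
		panic(err)
	}

	// Background mount, mirroring the "(dbg) daemon:" step in the log above.
	mount := exec.Command("minikube", "mount", "-p", profile, hostDir+":/mount-9p")
	if err := mount.Start(); err != nil {
		panic(err)
	}
	defer mount.Process.Kill()

	// Retry the guest-side check; the first findmnt in the log failed with
	// exit status 1 simply because the mount was not visible yet.
	for i := 0; i < 10; i++ {
		out, err := exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never became visible in the guest")
}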

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "320.356889ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "48.4083ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)
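
The -o json and --light variants above are the forms scripts usually consume. The exact JSON schema is not reproduced in this report, so the following illustrative Go sketch shells out to the same command and decodes generically instead of assuming field names; a release minikube on PATH stands in for out/minikube-linux-amd64:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Same command as the functional_test.go:1357 step above.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode into a generic map rather than a named struct, since the
	// schema is not shown in this log.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}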

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 service list -o json
functional_test.go:1490: Took "478.786446ms" to run "out/minikube-linux-amd64 -p functional-444884 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:32746
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:32746
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
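
Both endpoint lookups above print a URL on stdout (https://192.168.49.2:32746 and http://192.168.49.2:32746 here). A hedged Go sketch of the usual follow-up, fetching whatever URL the command prints; it assumes the hello-node service is still exposed and minikube is on PATH:

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	profile := "functional-444884" // profile name used throughout this suite

	// minikube -p <profile> service hello-node --url prints the NodePort URL,
	// e.g. http://192.168.49.2:32746 in the log above.
	out, err := exec.Command("minikube", "-p", profile, "service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	fmt.Println("endpoint:", url)

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}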

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdspecific-port4221799697/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.944492ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdspecific-port4221799697/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh "sudo umount -f /mount-9p": exit status 1 (249.25201ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-444884 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdspecific-port4221799697/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2896713052/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2896713052/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2896713052/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T" /mount1: exit status 1 (331.562062ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-444884 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2896713052/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2896713052/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-444884 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2896713052/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.83s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.46s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-444884 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-444884
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-444884
docker.io/kindest/kindnetd:v20240513-cd2ac642
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-444884 image ls --format short --alsologtostderr:
I0703 23:11:57.304868   69533 out.go:291] Setting OutFile to fd 1 ...
I0703 23:11:57.304963   69533 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.304970   69533 out.go:304] Setting ErrFile to fd 2...
I0703 23:11:57.304974   69533 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.305178   69533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
I0703 23:11:57.305742   69533 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.305880   69533 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.306252   69533 cli_runner.go:164] Run: docker container inspect functional-444884 --format={{.State.Status}}
I0703 23:11:57.326273   69533 ssh_runner.go:195] Run: systemctl --version
I0703 23:11:57.326331   69533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-444884
I0703 23:11:57.347784   69533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/functional-444884/id_rsa Username:docker}
I0703 23:11:57.436982   69533 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-444884 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-444884  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/echoserver                  | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:3861cf | 57.2MB |
| docker.io/kindest/kindnetd                  | v20240513-cd2ac642 | sha256:ac1c61 | 28.2MB |
| docker.io/library/nginx                     | alpine             | sha256:099a2d | 18.4MB |
| docker.io/library/nginx                     | latest             | sha256:fffffc | 71MB   |
| registry.k8s.io/kube-apiserver              | v1.30.2            | sha256:56ce0f | 32.8MB |
| docker.io/library/mysql                     | 5.7                | sha256:510733 | 138MB  |
| docker.io/library/minikube-local-cache-test | functional-444884  | sha256:3cdf6f | 990B   |
| registry.k8s.io/pause                       | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:cbb01a | 18.2MB |
| registry.k8s.io/kube-proxy                  | v1.30.2            | sha256:53c535 | 29MB   |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2            | sha256:e87481 | 31.1MB |
| registry.k8s.io/kube-scheduler              | v1.30.2            | sha256:7820c8 | 19.3MB |
| registry.k8s.io/pause                       | 3.1                | sha256:da86e6 | 315kB  |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-444884 image ls --format table --alsologtostderr:
I0703 23:11:57.520635   69758 out.go:291] Setting OutFile to fd 1 ...
I0703 23:11:57.520729   69758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.520737   69758 out.go:304] Setting ErrFile to fd 2...
I0703 23:11:57.520742   69758 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.520987   69758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
I0703 23:11:57.521582   69758 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.521674   69758 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.522008   69758 cli_runner.go:164] Run: docker container inspect functional-444884 --format={{.State.Status}}
I0703 23:11:57.541497   69758 ssh_runner.go:195] Run: systemctl --version
I0703 23:11:57.541551   69758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-444884
I0703 23:11:57.559276   69758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/functional-444884/id_rsa Username:docker}
I0703 23:11:57.644612   69758 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-444884 image ls --format json --alsologtostderr:
[{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-444884"],"size":"10823156"},{"id":"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"19328121"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c
05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb"],"repoTags":["docker.io/library/mysql:5.7"],"size":"137909886"},{"id":"sha256:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":["docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97
ae40c3dda26fbe98f22b6251acdf28e579560d55"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18403459"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"18182961"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:53c535741fb446f6b34d720fdc5748db368ef96771
111f3892682e6eab8f3772","repoDigests":["registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"29034457"},{"id":"sha256:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f","repoDigests":["docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8"],"repoTags":["docker.io/kindest/kindnetd:v20240513-cd2ac642"],"size":"28194900"},{"id":"sha256:3cdf6fe017fed06dd9a1a3e3cbd01b3a509b455ed3d7f9eb9f7fa5aa9040d0cd","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-444884"],"size":"990"},{"id":"sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":["docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df"],"repoTags":["docker.io/library/nginx:latest"],"size":"70984068"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDiges
ts":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"57236178"},{"id":"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":["registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"32768601"},{"id":"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"31138657"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-444884 image ls --format json --alsologtostderr:
I0703 23:11:57.294285   69532 out.go:291] Setting OutFile to fd 1 ...
I0703 23:11:57.294470   69532 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.294481   69532 out.go:304] Setting ErrFile to fd 2...
I0703 23:11:57.294489   69532 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.294759   69532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
I0703 23:11:57.295553   69532 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.295679   69532 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.296157   69532 cli_runner.go:164] Run: docker container inspect functional-444884 --format={{.State.Status}}
I0703 23:11:57.322485   69532 ssh_runner.go:195] Run: systemctl --version
I0703 23:11:57.322528   69532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-444884
I0703 23:11:57.343738   69532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/functional-444884/id_rsa Username:docker}
I0703 23:11:57.436964   69532 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-444884 image ls --format yaml --alsologtostderr:
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "46237695"
- id: sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests:
- registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "29034457"
- id: sha256:ac1c61439df4625ba53a9ceaccb5eb07a830bdf942cc1c60535a4dd7e763d55f
repoDigests:
- docker.io/kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8
repoTags:
- docker.io/kindest/kindnetd:v20240513-cd2ac642
size: "28194900"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests:
- docker.io/library/nginx@sha256:67682bda769fae1ccf5183192b8daf37b64cae99c6c3302650f6f8bf5f0f95df
repoTags:
- docker.io/library/nginx:latest
size: "70984068"
- id: sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "32768601"
- id: sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "31138657"
- id: sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "19328121"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:3cdf6fe017fed06dd9a1a3e3cbd01b3a509b455ed3d7f9eb9f7fa5aa9040d0cd
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-444884
size: "990"
- id: sha256:5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
repoTags:
- docker.io/library/mysql:5.7
size: "137909886"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "18182961"
- id: sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "57236178"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests:
- docker.io/library/nginx@sha256:a45ee5d042aaa9e81e013f97ae40c3dda26fbe98f22b6251acdf28e579560d55
repoTags:
- docker.io/library/nginx:alpine
size: "18403459"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-444884
size: "10823156"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-444884 image ls --format yaml --alsologtostderr:
I0703 23:11:57.295280   69534 out.go:291] Setting OutFile to fd 1 ...
I0703 23:11:57.295382   69534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.295390   69534 out.go:304] Setting ErrFile to fd 2...
I0703 23:11:57.295394   69534 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.295563   69534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
I0703 23:11:57.296188   69534 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.296343   69534 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.296974   69534 cli_runner.go:164] Run: docker container inspect functional-444884 --format={{.State.Status}}
I0703 23:11:57.319305   69534 ssh_runner.go:195] Run: systemctl --version
I0703 23:11:57.319357   69534 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-444884
I0703 23:11:57.338076   69534 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/functional-444884/id_rsa Username:docker}
I0703 23:11:57.432744   69534 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-444884 ssh pgrep buildkitd: exit status 1 (266.749248ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image build -t localhost/my-image:functional-444884 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 image build -t localhost/my-image:functional-444884 testdata/build --alsologtostderr: (2.694984837s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-444884 image build -t localhost/my-image:functional-444884 testdata/build --alsologtostderr:
I0703 23:11:57.550372   69771 out.go:291] Setting OutFile to fd 1 ...
I0703 23:11:57.550658   69771 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.550672   69771 out.go:304] Setting ErrFile to fd 2...
I0703 23:11:57.550679   69771 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 23:11:57.551272   69771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
I0703 23:11:57.552446   69771 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.553005   69771 config.go:182] Loaded profile config "functional-444884": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
I0703 23:11:57.553377   69771 cli_runner.go:164] Run: docker container inspect functional-444884 --format={{.State.Status}}
I0703 23:11:57.569953   69771 ssh_runner.go:195] Run: systemctl --version
I0703 23:11:57.569992   69771 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-444884
I0703 23:11:57.588141   69771 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/functional-444884/id_rsa Username:docker}
I0703 23:11:57.673151   69771 build_images.go:161] Building image from path: /tmp/build.3731496979.tar
I0703 23:11:57.673218   69771 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0703 23:11:57.682660   69771 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3731496979.tar
I0703 23:11:57.685651   69771 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3731496979.tar: stat -c "%s %y" /var/lib/minikube/build/build.3731496979.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3731496979.tar': No such file or directory
I0703 23:11:57.685677   69771 ssh_runner.go:362] scp /tmp/build.3731496979.tar --> /var/lib/minikube/build/build.3731496979.tar (3072 bytes)
I0703 23:11:57.707112   69771 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3731496979
I0703 23:11:57.714482   69771 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3731496979 -xf /var/lib/minikube/build/build.3731496979.tar
I0703 23:11:57.722123   69771 containerd.go:394] Building image: /var/lib/minikube/build/build.3731496979
I0703 23:11:57.722190   69771 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3731496979 --local dockerfile=/var/lib/minikube/build/build.3731496979 --output type=image,name=localhost/my-image:functional-444884
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.7s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.8s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:dac6f7229bdde40e280e8f6dddc4c0d4900ee7a571c72381da3d953df43c80ca done
#8 exporting config sha256:83808d9aa0da25153e89efba2bbaa521fa135afb26929f052b56cfd85957eff5 done
#8 naming to localhost/my-image:functional-444884 done
#8 DONE 0.1s
I0703 23:12:00.180553   69771 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3731496979 --local dockerfile=/var/lib/minikube/build/build.3731496979 --output type=image,name=localhost/my-image:functional-444884: (2.458321999s)
I0703 23:12:00.180624   69771 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3731496979
I0703 23:12:00.189169   69771 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3731496979.tar
I0703 23:12:00.196847   69771 build_images.go:217] Built localhost/my-image:functional-444884 from /tmp/build.3731496979.tar
I0703 23:12:00.196879   69771 build_images.go:133] succeeded building to: functional-444884
I0703 23:12:00.196885   69771 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
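
As the stderr above shows, on the containerd runtime minikube image build ships the build context to the node as a tar and drives BuildKit there via sudo buildctl build --frontend dockerfile.v0. A small Go sketch that triggers the same path; the context directory and tag are illustrative, and minikube is assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "functional-444884"         // profile from this run
	contextDir := "testdata/build"         // any directory containing a Dockerfile
	tag := "localhost/my-image:" + profile // same naming scheme as the test

	// Equivalent of the functional_test.go:314 step above.
	cmd := exec.Command("minikube", "-p", profile, "image", "build",
		"-t", tag, contextDir, "--alsologtostderr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}

	// Confirm the image landed in the node's containerd image store.
	ls, err := exec.Command("minikube", "-p", profile, "image", "ls").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(ls))
}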

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.173663679s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-444884
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image load --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 image load --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr: (5.99500932s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image load --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 image load --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr: (2.859240047s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.05s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.857327067s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-444884
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image load --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-444884 image load --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr: (3.159071628s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image save gcr.io/google-containers/addon-resizer:functional-444884 /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image rm gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image load /home/jenkins/workspace/Docker_Linux_containerd_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.00s)
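
ImageSaveToFile, ImageRemove and ImageLoadFromFile above form a round trip: export an image from the node to a tarball on the host, delete it, and load it back. A condensed Go sketch of that round trip, with an illustrative tarball path and the same image tag the suite uses; minikube is assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and panics on failure, printing combined output.
func run(args ...string) {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}

func main() {
	profile := "functional-444884"
	image := "gcr.io/google-containers/addon-resizer:" + profile // tag used by the suite
	tarball := "/tmp/addon-resizer-save.tar"                     // illustrative path

	run("-p", profile, "image", "save", image, tarball, "--alsologtostderr")
	run("-p", profile, "image", "rm", image, "--alsologtostderr")
	run("-p", profile, "image", "load", tarball, "--alsologtostderr")
	run("-p", profile, "image", "ls")
}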

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-444884
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-444884 image save --daemon gcr.io/google-containers/addon-resizer:functional-444884 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-444884
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.76s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-444884
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-444884
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-444884
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (95.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-701657 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0703 23:13:13.497230   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-701657 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m34.916437621s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (95.56s)
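
The --ha start above brings up a multi-control-plane cluster in one command. A minimal Go sketch with the same flags the test passes; the profile name is illustrative and minikube is assumed to be on PATH:

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to minikube and panics on failure.
func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("minikube %v: %v\n%s", args, err, out))
	}
	return string(out)
}

func main() {
	profile := "ha-demo" // illustrative; this run used ha-701657

	// Same flags the ha_test.go:101 step passes to create the HA cluster.
	run("start", "-p", profile, "--wait=true", "--memory=2200", "--ha",
		"--driver=docker", "--container-runtime=containerd")

	// Mirror the ha_test.go:107 step: print the per-node status report.
	fmt.Print(run("-p", profile, "status", "--alsologtostderr"))
}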

                                                
                                    
TestMultiControlPlane/serial/DeployApp (29.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-701657 -- rollout status deployment/busybox: (28.172413366s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-2ppg5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-blc88 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-prtx9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-2ppg5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-blc88 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-prtx9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-2ppg5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-blc88 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-prtx9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (29.87s)
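
The deploy check above waits for the busybox rollout and then resolves in-cluster DNS from every pod. The same verification can be done with plain kubectl against the ha-701657 context that minikube writes to the kubeconfig; a hedged Go sketch:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "ha-701657" // kubeconfig context created by the StartCluster step above

	// Wait for the busybox rollout, as the ha_test.go:133 step does.
	if out, err := exec.Command("kubectl", "--context", ctx,
		"rollout", "status", "deployment/busybox").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("rollout: %v\n%s", err, out))
	}

	// List pod names the same way the ha_test.go:163 step does, then resolve
	// an in-cluster name from each busybox pod (ha_test.go:181).
	names, err := exec.Command("kubectl", "--context", ctx, "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(names)) {
		if !strings.HasPrefix(pod, "busybox-") {
			continue
		}
		out, err := exec.Command("kubectl", "--context", ctx, "exec", pod,
			"--", "nslookup", "kubernetes.default").CombinedOutput()
		fmt.Printf("%s:\n%s\n", pod, out)
		if err != nil {
			panic(err)
		}
	}
}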

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-2ppg5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-2ppg5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-blc88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-blc88 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-prtx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-701657 -- exec busybox-fc5497c4f-prtx9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (0.94s)
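
host.minikube.internal resolves to the host side of the cluster network (192.168.49.1 on the default docker network in this run). A sketch of the same check, reusing the test's nslookup/awk pipeline with a placeholder pod name:

  POD=busybox-example                               # placeholder pod name
  HOST_IP=$(kubectl exec "$POD" -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  kubectl exec "$POD" -- sh -c "ping -c 1 $HOST_IP"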

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (17.45s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-701657 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-701657 -v=7 --alsologtostderr: (16.666234605s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (17.45s)
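
Adding a worker to an existing profile is a single command; a minimal sketch with the same example profile name as above:

  minikube node add -p ha-demo          # joins a new worker node
  minikube -p ha-demo status            # the new node should appear as type: Worker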

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-701657 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.61s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (14.98s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp testdata/cp-test.txt ha-701657:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2696390537/001/cp-test_ha-701657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657:/home/docker/cp-test.txt ha-701657-m02:/home/docker/cp-test_ha-701657_ha-701657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test_ha-701657_ha-701657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657:/home/docker/cp-test.txt ha-701657-m03:/home/docker/cp-test_ha-701657_ha-701657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test_ha-701657_ha-701657-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657:/home/docker/cp-test.txt ha-701657-m04:/home/docker/cp-test_ha-701657_ha-701657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test_ha-701657_ha-701657-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp testdata/cp-test.txt ha-701657-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2696390537/001/cp-test_ha-701657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m02:/home/docker/cp-test.txt ha-701657:/home/docker/cp-test_ha-701657-m02_ha-701657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test_ha-701657-m02_ha-701657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m02:/home/docker/cp-test.txt ha-701657-m03:/home/docker/cp-test_ha-701657-m02_ha-701657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test_ha-701657-m02_ha-701657-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m02:/home/docker/cp-test.txt ha-701657-m04:/home/docker/cp-test_ha-701657-m02_ha-701657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test_ha-701657-m02_ha-701657-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp testdata/cp-test.txt ha-701657-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2696390537/001/cp-test_ha-701657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m03:/home/docker/cp-test.txt ha-701657:/home/docker/cp-test_ha-701657-m03_ha-701657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test_ha-701657-m03_ha-701657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m03:/home/docker/cp-test.txt ha-701657-m02:/home/docker/cp-test_ha-701657-m03_ha-701657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test_ha-701657-m03_ha-701657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m03:/home/docker/cp-test.txt ha-701657-m04:/home/docker/cp-test_ha-701657-m03_ha-701657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test_ha-701657-m03_ha-701657-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp testdata/cp-test.txt ha-701657-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2696390537/001/cp-test_ha-701657-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m04:/home/docker/cp-test.txt ha-701657:/home/docker/cp-test_ha-701657-m04_ha-701657.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657 "sudo cat /home/docker/cp-test_ha-701657-m04_ha-701657.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m04:/home/docker/cp-test.txt ha-701657-m02:/home/docker/cp-test_ha-701657-m04_ha-701657-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m02 "sudo cat /home/docker/cp-test_ha-701657-m04_ha-701657-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 cp ha-701657-m04:/home/docker/cp-test.txt ha-701657-m03:/home/docker/cp-test_ha-701657-m04_ha-701657-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 ssh -n ha-701657-m03 "sudo cat /home/docker/cp-test_ha-701657-m04_ha-701657-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (14.98s)
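
The copy matrix above exercises the three directions that minikube cp supports (host to node, node to host, node to node), with minikube ssh -n used to verify each result. A minimal sketch with placeholder paths and the same example profile:

  minikube -p ha-demo cp ./cp-test.txt ha-demo:/home/docker/cp-test.txt                          # host -> node
  minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt /tmp/cp-test.txt                       # node -> host
  minikube -p ha-demo cp ha-demo:/home/docker/cp-test.txt ha-demo-m02:/home/docker/cp-test.txt   # node -> node
  minikube -p ha-demo ssh -n ha-demo-m02 "sudo cat /home/docker/cp-test.txt"                     # verify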

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.43s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-701657 node stop m02 -v=7 --alsologtostderr: (11.805455711s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr: exit status 7 (627.465927ms)

                                                
                                                
-- stdout --
	ha-701657
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-701657-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-701657-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-701657-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:14:54.736576   91015 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:14:54.736696   91015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:14:54.736706   91015 out.go:304] Setting ErrFile to fd 2...
	I0703 23:14:54.736712   91015 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:14:54.736922   91015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:14:54.737139   91015 out.go:298] Setting JSON to false
	I0703 23:14:54.737174   91015 mustload.go:65] Loading cluster: ha-701657
	I0703 23:14:54.737199   91015 notify.go:220] Checking for updates...
	I0703 23:14:54.739014   91015 config.go:182] Loaded profile config "ha-701657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:14:54.739038   91015 status.go:255] checking status of ha-701657 ...
	I0703 23:14:54.739443   91015 cli_runner.go:164] Run: docker container inspect ha-701657 --format={{.State.Status}}
	I0703 23:14:54.757357   91015 status.go:330] ha-701657 host status = "Running" (err=<nil>)
	I0703 23:14:54.757380   91015 host.go:66] Checking if "ha-701657" exists ...
	I0703 23:14:54.757617   91015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-701657
	I0703 23:14:54.775016   91015 host.go:66] Checking if "ha-701657" exists ...
	I0703 23:14:54.775342   91015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:14:54.775401   91015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-701657
	I0703 23:14:54.791975   91015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/ha-701657/id_rsa Username:docker}
	I0703 23:14:54.882088   91015 ssh_runner.go:195] Run: systemctl --version
	I0703 23:14:54.885783   91015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:14:54.896245   91015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:14:54.943892   91015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-07-03 23:14:54.934661181 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:14:54.944548   91015 kubeconfig.go:125] found "ha-701657" server: "https://192.168.49.254:8443"
	I0703 23:14:54.944578   91015 api_server.go:166] Checking apiserver status ...
	I0703 23:14:54.944617   91015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:14:54.954745   91015 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1668/cgroup
	I0703 23:14:54.962918   91015 api_server.go:182] apiserver freezer: "2:freezer:/docker/faa81788bb65a4f8b8f00b1514448e9db9c3e86b2210aefe79a3ecead27d8612/kubepods/burstable/pod633f9d0ae5f02788bc14b8c0db427637/ab50483377fda613fcbe97c8482743667834f4ae31f7938f6483f1da2cb20da7"
	I0703 23:14:54.962962   91015 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/faa81788bb65a4f8b8f00b1514448e9db9c3e86b2210aefe79a3ecead27d8612/kubepods/burstable/pod633f9d0ae5f02788bc14b8c0db427637/ab50483377fda613fcbe97c8482743667834f4ae31f7938f6483f1da2cb20da7/freezer.state
	I0703 23:14:54.970401   91015 api_server.go:204] freezer state: "THAWED"
	I0703 23:14:54.970425   91015 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0703 23:14:54.975361   91015 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0703 23:14:54.975381   91015 status.go:422] ha-701657 apiserver status = Running (err=<nil>)
	I0703 23:14:54.975390   91015 status.go:257] ha-701657 status: &{Name:ha-701657 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:14:54.975405   91015 status.go:255] checking status of ha-701657-m02 ...
	I0703 23:14:54.975630   91015 cli_runner.go:164] Run: docker container inspect ha-701657-m02 --format={{.State.Status}}
	I0703 23:14:54.992196   91015 status.go:330] ha-701657-m02 host status = "Stopped" (err=<nil>)
	I0703 23:14:54.992221   91015 status.go:343] host is not running, skipping remaining checks
	I0703 23:14:54.992229   91015 status.go:257] ha-701657-m02 status: &{Name:ha-701657-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:14:54.992250   91015 status.go:255] checking status of ha-701657-m03 ...
	I0703 23:14:54.992519   91015 cli_runner.go:164] Run: docker container inspect ha-701657-m03 --format={{.State.Status}}
	I0703 23:14:55.008814   91015 status.go:330] ha-701657-m03 host status = "Running" (err=<nil>)
	I0703 23:14:55.008836   91015 host.go:66] Checking if "ha-701657-m03" exists ...
	I0703 23:14:55.009073   91015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-701657-m03
	I0703 23:14:55.025184   91015 host.go:66] Checking if "ha-701657-m03" exists ...
	I0703 23:14:55.025437   91015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:14:55.025469   91015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-701657-m03
	I0703 23:14:55.041679   91015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/ha-701657-m03/id_rsa Username:docker}
	I0703 23:14:55.133768   91015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:14:55.144168   91015 kubeconfig.go:125] found "ha-701657" server: "https://192.168.49.254:8443"
	I0703 23:14:55.144196   91015 api_server.go:166] Checking apiserver status ...
	I0703 23:14:55.144234   91015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:14:55.153640   91015 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1525/cgroup
	I0703 23:14:55.161795   91015 api_server.go:182] apiserver freezer: "2:freezer:/docker/f89af37aa4079f1811e830e02cd48301c853f5e326d5c27f30483c39208dfe81/kubepods/burstable/pod0fa013bb2e067ec83154ae4cb0132d6a/bd7037d7288cb534dbb8e751c588a13b31ad018cc411d068e5fdb97f6e737b5d"
	I0703 23:14:55.161855   91015 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f89af37aa4079f1811e830e02cd48301c853f5e326d5c27f30483c39208dfe81/kubepods/burstable/pod0fa013bb2e067ec83154ae4cb0132d6a/bd7037d7288cb534dbb8e751c588a13b31ad018cc411d068e5fdb97f6e737b5d/freezer.state
	I0703 23:14:55.169101   91015 api_server.go:204] freezer state: "THAWED"
	I0703 23:14:55.169127   91015 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0703 23:14:55.172574   91015 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0703 23:14:55.172596   91015 status.go:422] ha-701657-m03 apiserver status = Running (err=<nil>)
	I0703 23:14:55.172606   91015 status.go:257] ha-701657-m03 status: &{Name:ha-701657-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:14:55.172622   91015 status.go:255] checking status of ha-701657-m04 ...
	I0703 23:14:55.172950   91015 cli_runner.go:164] Run: docker container inspect ha-701657-m04 --format={{.State.Status}}
	I0703 23:14:55.190943   91015 status.go:330] ha-701657-m04 host status = "Running" (err=<nil>)
	I0703 23:14:55.190967   91015 host.go:66] Checking if "ha-701657-m04" exists ...
	I0703 23:14:55.191236   91015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-701657-m04
	I0703 23:14:55.208086   91015 host.go:66] Checking if "ha-701657-m04" exists ...
	I0703 23:14:55.208343   91015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:14:55.208389   91015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-701657-m04
	I0703 23:14:55.224781   91015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/ha-701657-m04/id_rsa Username:docker}
	I0703 23:14:55.313271   91015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:14:55.322981   91015 status.go:257] ha-701657-m04 status: &{Name:ha-701657-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.43s)
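
Stopping a single control-plane node and inspecting the degraded state looks like the sketch below. Note that minikube status exits non-zero (7 in this run) whenever any node is not running, so scripts should tolerate that.

  minikube -p ha-demo node stop m02
  minikube -p ha-demo status || true    # expect m02 to report host/kubelet/apiserver: Stopped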

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.45s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.45s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (15.2s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-701657 node start m02 -v=7 --alsologtostderr: (14.364388722s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (15.20s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.62s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.71s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-701657 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-701657 -v=7 --alsologtostderr
E0703 23:15:23.465881   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:23.471141   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:23.481376   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:23.501625   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:23.541900   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:23.622170   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:23.782566   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:24.102987   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:24.743218   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:26.023650   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:28.584889   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:29.653259   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:15:33.705725   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:15:43.946183   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-701657 -v=7 --alsologtostderr: (36.583340067s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-701657 --wait=true -v=7 --alsologtostderr
E0703 23:15:57.339643   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:16:04.427081   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:16:45.387878   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-701657 --wait=true -v=7 --alsologtostderr: (1m21.025996621s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-701657
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.71s)
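
A full stop-and-restart of the profile is expected to preserve the node list, which is what this subtest asserts. Sketch:

  minikube node list -p ha-demo            # record the nodes before stopping
  minikube stop -p ha-demo
  minikube start -p ha-demo --wait=true    # restarts every previously added node
  minikube node list -p ha-demo            # should match the list captured above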

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.79s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-701657 node delete m03 -v=7 --alsologtostderr: (9.065013907s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.79s)
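
Removing a control-plane member and confirming the rest of the cluster stays Ready:

  minikube -p ha-demo node delete m03
  kubectl get nodes      # the deleted member should be gone and the remaining nodes Ready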

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.43s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.43s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.52s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-701657 stop -v=7 --alsologtostderr: (35.423555329s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr: exit status 7 (94.259848ms)

                                                
                                                
-- stdout --
	ha-701657
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-701657-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-701657-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:17:54.986543  108395 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:17:54.986649  108395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:17:54.986656  108395 out.go:304] Setting ErrFile to fd 2...
	I0703 23:17:54.986661  108395 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:17:54.986870  108395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:17:54.987024  108395 out.go:298] Setting JSON to false
	I0703 23:17:54.987050  108395 mustload.go:65] Loading cluster: ha-701657
	I0703 23:17:54.987148  108395 notify.go:220] Checking for updates...
	I0703 23:17:54.987437  108395 config.go:182] Loaded profile config "ha-701657": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:17:54.987455  108395 status.go:255] checking status of ha-701657 ...
	I0703 23:17:54.987929  108395 cli_runner.go:164] Run: docker container inspect ha-701657 --format={{.State.Status}}
	I0703 23:17:55.006297  108395 status.go:330] ha-701657 host status = "Stopped" (err=<nil>)
	I0703 23:17:55.006321  108395 status.go:343] host is not running, skipping remaining checks
	I0703 23:17:55.006327  108395 status.go:257] ha-701657 status: &{Name:ha-701657 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:17:55.006369  108395 status.go:255] checking status of ha-701657-m02 ...
	I0703 23:17:55.006591  108395 cli_runner.go:164] Run: docker container inspect ha-701657-m02 --format={{.State.Status}}
	I0703 23:17:55.023221  108395 status.go:330] ha-701657-m02 host status = "Stopped" (err=<nil>)
	I0703 23:17:55.023253  108395 status.go:343] host is not running, skipping remaining checks
	I0703 23:17:55.023259  108395 status.go:257] ha-701657-m02 status: &{Name:ha-701657-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:17:55.023276  108395 status.go:255] checking status of ha-701657-m04 ...
	I0703 23:17:55.023506  108395 cli_runner.go:164] Run: docker container inspect ha-701657-m04 --format={{.State.Status}}
	I0703 23:17:55.039524  108395 status.go:330] ha-701657-m04 host status = "Stopped" (err=<nil>)
	I0703 23:17:55.039544  108395 status.go:343] host is not running, skipping remaining checks
	I0703 23:17:55.039550  108395 status.go:257] ha-701657-m04 status: &{Name:ha-701657-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.52s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (44.92s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-701657 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0703 23:18:07.308581   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-701657 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (44.18660333s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (44.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.44s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (36.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-701657 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-701657 --control-plane -v=7 --alsologtostderr: (35.412906344s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-701657 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (36.20s)
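
Adding another control-plane member (rather than a worker) only needs the --control-plane flag:

  minikube node add -p ha-demo --control-plane
  minikube -p ha-demo status    # the new node should show type: Control Plane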

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.61s)

                                                
                                    
TestJSONOutput/start/Command (50.07s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-676798 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-676798 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.064755354s)
--- PASS: TestJSONOutput/start/Command (50.07s)
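
With --output=json, minikube emits one CloudEvents-style JSON object per line (the same schema that appears verbatim in the TestErrorJSONOutput stdout further down). A sketch of consuming that stream, assuming jq is available on the host; the profile name is an example:

  minikube start -p json-demo --output=json --user=testUser --memory=2200 --wait=true \
      --driver=docker --container-runtime=containerd \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'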

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-676798 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.56s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-676798 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.64s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-676798 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-676798 --output=json --user=testUser: (5.635823925s)
--- PASS: TestJSONOutput/stop/Command (5.64s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-410612 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-410612 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.118836ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a163b8d0-7d5f-41d6-ab48-9c99f0a0c220","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-410612] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9c2fa102-46e4-4b5c-8389-a3a14424f43f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18859"}}
	{"specversion":"1.0","id":"8f38906f-8fd7-46d6-a42f-2ea6306a5c46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"3c6c2985-9a6e-43e0-90e6-1e3d61b7dc1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig"}}
	{"specversion":"1.0","id":"2058f89f-0f8f-43b6-8b0a-9ef44bc12ebc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube"}}
	{"specversion":"1.0","id":"f454b84d-9c7a-46c6-bb95-300557b05cf8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"4e3676ff-7432-4f3f-a978-a0c4ef1e84a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"27a75f9f-949a-4b9f-98f8-b2c0522ba30f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-410612" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-410612
--- PASS: TestErrorJSONOutput (0.19s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (37.01s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-989921 --network=
E0703 23:20:29.653339   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:20:51.148888   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-989921 --network=: (34.957067113s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-989921" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-989921
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-989921: (2.033281783s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.01s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-892653 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-892653 --network=bridge: (21.20259518s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-892653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-892653
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-892653: (1.820261861s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.04s)

                                                
                                    
TestKicExistingNetwork (24.91s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-298254 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-298254 --network=existing-network: (22.88007308s)
helpers_test.go:175: Cleaning up "existing-network-298254" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-298254
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-298254: (1.89356605s)
--- PASS: TestKicExistingNetwork (24.91s)
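
The KIC network tests cover the --network flag in three modes: a named network minikube creates itself, the stock docker bridge (--network=bridge), and a network that already exists. For the last case the network has to exist before minikube start; a sketch using the plain docker CLI, with example names:

  docker network create existing-network               # pre-create the network outside minikube
  minikube start -p net-demo --network=existing-network
  docker network ls --format "{{.Name}}"               # both networks should be listed
  minikube delete -p net-demo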

                                                
                                    
TestKicCustomSubnet (26.46s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-568403 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-568403 --subnet=192.168.60.0/24: (24.477692614s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-568403 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-568403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-568403
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-568403: (1.969873386s)
--- PASS: TestKicCustomSubnet (26.46s)

                                                
                                    
TestKicStaticIP (25.56s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-508205 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-508205 --static-ip=192.168.200.200: (23.415313929s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-508205 ip
helpers_test.go:175: Cleaning up "static-ip-508205" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-508205
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-508205: (2.031559218s)
--- PASS: TestKicStaticIP (25.56s)
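
Custom addressing on the docker driver is driven by --subnet and --static-ip; the docker network inspect format string below is the same one the subnet test uses for verification. Profile names are examples:

  minikube start -p subnet-demo --subnet=192.168.60.0/24
  docker network inspect subnet-demo --format "{{(index .IPAM.Config 0).Subnet}}"
  minikube start -p staticip-demo --static-ip=192.168.200.200
  minikube -p staticip-demo ip          # should print 192.168.200.200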

                                                
                                    
TestMainNoArgs (0.04s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (44.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-188796 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-188796 --driver=docker  --container-runtime=containerd: (19.206703903s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-191782 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-191782 --driver=docker  --container-runtime=containerd: (20.110939251s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-188796
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-191782
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-191782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-191782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-191782: (1.851725918s)
helpers_test.go:175: Cleaning up "first-188796" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-188796
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-188796: (2.153882277s)
--- PASS: TestMinikubeProfile (44.31s)
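
Each profile is an independent cluster; minikube profile switches the active one and profile list reports them all. A sketch with example profile names:

  minikube start -p first  --driver=docker --container-runtime=containerd
  minikube start -p second --driver=docker --container-runtime=containerd
  minikube profile first                # make "first" the active profile
  minikube profile list -ojson          # machine-readable view of every profile
  minikube delete -p second && minikube delete -p first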

                                                
                                    
TestMountStart/serial/StartWithMountFirst (5.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-518895 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-518895 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.212168699s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.21s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-518895 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
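
The mount flags used above, collected into one manual sketch (profile name illustrative; flag values copied from mount_start_test.go):

	# start a no-Kubernetes profile with the host mount enabled
	minikube start -p mount-demo --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
	# the mounted host directory should be listable inside the node
	minikube -p mount-demo ssh -- ls /minikube-host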

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.92s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-529591 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-529591 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.921725559s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.92s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-529591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.23s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.55s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-518895 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-518895 --alsologtostderr -v=5: (1.54852226s)
--- PASS: TestMountStart/serial/DeleteFirst (1.55s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-529591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

                                                
                                    
TestMountStart/serial/Stop (1.16s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-529591
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-529591: (1.161318277s)
--- PASS: TestMountStart/serial/Stop (1.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.13s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-529591
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-529591: (6.126967391s)
--- PASS: TestMountStart/serial/RestartStopped (7.13s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-529591 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (60.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405072 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405072 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m0.066732824s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (60.48s)
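
A two-node cluster equivalent to the one above can be brought up directly (profile name illustrative):

	minikube start -p multinode-demo --wait=true --memory=2200 --nodes=2 --driver=docker --container-runtime=containerd
	# both the control plane and the worker should report Running
	minikube -p multinode-demo status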

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-405072 -- rollout status deployment/busybox: (2.665390231s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-65tl9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-kdqqb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-65tl9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-kdqqb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-65tl9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-kdqqb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.91s)
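
The DNS checks above run against a busybox deployment applied from testdata/multinodes/multinode-pod-dns-test.yaml (not reproduced here). Assuming such a deployment exists and kubectl points at the profile's context, the per-pod check reduces to the following; the pod name is illustrative.

	# one pod name per replica
	kubectl get pods -o jsonpath='{.items[*].metadata.name}'
	# each replica must resolve an external name, the API service short name, and its FQDN
	kubectl exec busybox-example -- nslookup kubernetes.io
	kubectl exec busybox-example -- nslookup kubernetes.default
	kubectl exec busybox-example -- nslookup kubernetes.default.svc.cluster.local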

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-65tl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-65tl9 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-kdqqb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-405072 -- exec busybox-fc5497c4f-kdqqb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.65s)
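
Host reachability from a pod is checked the same way; note that the gateway address 192.168.67.1 is specific to this run's cluster network, and the pod name is illustrative.

	# IP that host.minikube.internal resolves to inside the pod
	kubectl exec busybox-example -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	# the pod should reach the host gateway directly
	kubectl exec busybox-example -- sh -c "ping -c 1 192.168.67.1"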

                                                
                                    
TestMultiNode/serial/AddNode (17.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-405072 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-405072 -v 3 --alsologtostderr: (17.006512887s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.57s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-405072 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.27s)

                                                
                                    
TestMultiNode/serial/CopyFile (8.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp testdata/cp-test.txt multinode-405072:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile511890224/001/cp-test_multinode-405072.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072:/home/docker/cp-test.txt multinode-405072-m02:/home/docker/cp-test_multinode-405072_multinode-405072-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m02 "sudo cat /home/docker/cp-test_multinode-405072_multinode-405072-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072:/home/docker/cp-test.txt multinode-405072-m03:/home/docker/cp-test_multinode-405072_multinode-405072-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m03 "sudo cat /home/docker/cp-test_multinode-405072_multinode-405072-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp testdata/cp-test.txt multinode-405072-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile511890224/001/cp-test_multinode-405072-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072-m02:/home/docker/cp-test.txt multinode-405072:/home/docker/cp-test_multinode-405072-m02_multinode-405072.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072 "sudo cat /home/docker/cp-test_multinode-405072-m02_multinode-405072.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072-m02:/home/docker/cp-test.txt multinode-405072-m03:/home/docker/cp-test_multinode-405072-m02_multinode-405072-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m03 "sudo cat /home/docker/cp-test_multinode-405072-m02_multinode-405072-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp testdata/cp-test.txt multinode-405072-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile511890224/001/cp-test_multinode-405072-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072-m03:/home/docker/cp-test.txt multinode-405072:/home/docker/cp-test_multinode-405072-m03_multinode-405072.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m03 "sudo cat /home/docker/cp-test.txt"
E0703 23:25:23.465331   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072 "sudo cat /home/docker/cp-test_multinode-405072-m03_multinode-405072.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 cp multinode-405072-m03:/home/docker/cp-test.txt multinode-405072-m02:/home/docker/cp-test_multinode-405072-m03_multinode-405072-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 ssh -n multinode-405072-m02 "sudo cat /home/docker/cp-test_multinode-405072-m03_multinode-405072-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.55s)
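
The copy matrix above boils down to three forms of minikube cp plus ssh for verification; a condensed sketch with illustrative profile and node names:

	# host -> primary node
	minikube -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
	# node -> host
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt /tmp/cp-test.txt
	# node -> another node, then read it back over ssh
	minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt multinode-demo-m02:/home/docker/cp-test.txt
	minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"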

                                                
                                    
TestMultiNode/serial/StopNode (2.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-405072 node stop m03: (1.167943781s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405072 status: exit status 7 (437.13303ms)

                                                
                                                
-- stdout --
	multinode-405072
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-405072-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-405072-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr: exit status 7 (437.440425ms)

                                                
                                                
-- stdout --
	multinode-405072
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-405072-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-405072-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:25:26.424355  173760 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:25:26.424454  173760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:25:26.424462  173760 out.go:304] Setting ErrFile to fd 2...
	I0703 23:25:26.424466  173760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:25:26.424620  173760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:25:26.424814  173760 out.go:298] Setting JSON to false
	I0703 23:25:26.424845  173760 mustload.go:65] Loading cluster: multinode-405072
	I0703 23:25:26.424870  173760 notify.go:220] Checking for updates...
	I0703 23:25:26.426016  173760 config.go:182] Loaded profile config "multinode-405072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:25:26.426050  173760 status.go:255] checking status of multinode-405072 ...
	I0703 23:25:26.426729  173760 cli_runner.go:164] Run: docker container inspect multinode-405072 --format={{.State.Status}}
	I0703 23:25:26.443787  173760 status.go:330] multinode-405072 host status = "Running" (err=<nil>)
	I0703 23:25:26.443812  173760 host.go:66] Checking if "multinode-405072" exists ...
	I0703 23:25:26.444039  173760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-405072
	I0703 23:25:26.460102  173760 host.go:66] Checking if "multinode-405072" exists ...
	I0703 23:25:26.460404  173760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:25:26.460459  173760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-405072
	I0703 23:25:26.477016  173760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/multinode-405072/id_rsa Username:docker}
	I0703 23:25:26.565574  173760 ssh_runner.go:195] Run: systemctl --version
	I0703 23:25:26.569382  173760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:25:26.579246  173760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:25:26.627396  173760 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:62 SystemTime:2024-07-03 23:25:26.618165907 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:25:26.627908  173760 kubeconfig.go:125] found "multinode-405072" server: "https://192.168.67.2:8443"
	I0703 23:25:26.627932  173760 api_server.go:166] Checking apiserver status ...
	I0703 23:25:26.627960  173760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:25:26.637885  173760 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1587/cgroup
	I0703 23:25:26.645893  173760 api_server.go:182] apiserver freezer: "2:freezer:/docker/2e013ab72362cc1d0715eb13c3a0e0170e342fee77d834a77317cf13d55a95fc/kubepods/burstable/podc3a3744dcd3a0282eb85de931444d8f1/1c5a12fcb72f72f2d4303d2241bb88541e48096768f0fb6c5fa550bb31556cac"
	I0703 23:25:26.645948  173760 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2e013ab72362cc1d0715eb13c3a0e0170e342fee77d834a77317cf13d55a95fc/kubepods/burstable/podc3a3744dcd3a0282eb85de931444d8f1/1c5a12fcb72f72f2d4303d2241bb88541e48096768f0fb6c5fa550bb31556cac/freezer.state
	I0703 23:25:26.653362  173760 api_server.go:204] freezer state: "THAWED"
	I0703 23:25:26.653396  173760 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0703 23:25:26.656923  173760 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0703 23:25:26.656944  173760 status.go:422] multinode-405072 apiserver status = Running (err=<nil>)
	I0703 23:25:26.656967  173760 status.go:257] multinode-405072 status: &{Name:multinode-405072 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:25:26.656993  173760 status.go:255] checking status of multinode-405072-m02 ...
	I0703 23:25:26.657235  173760 cli_runner.go:164] Run: docker container inspect multinode-405072-m02 --format={{.State.Status}}
	I0703 23:25:26.673779  173760 status.go:330] multinode-405072-m02 host status = "Running" (err=<nil>)
	I0703 23:25:26.673797  173760 host.go:66] Checking if "multinode-405072-m02" exists ...
	I0703 23:25:26.674060  173760 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-405072-m02
	I0703 23:25:26.689903  173760 host.go:66] Checking if "multinode-405072-m02" exists ...
	I0703 23:25:26.690161  173760 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:25:26.690223  173760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-405072-m02
	I0703 23:25:26.706429  173760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/18859-12140/.minikube/machines/multinode-405072-m02/id_rsa Username:docker}
	I0703 23:25:26.793396  173760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:25:26.804584  173760 status.go:257] multinode-405072-m02 status: &{Name:multinode-405072-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:25:26.804626  173760 status.go:255] checking status of multinode-405072-m03 ...
	I0703 23:25:26.804935  173760 cli_runner.go:164] Run: docker container inspect multinode-405072-m03 --format={{.State.Status}}
	I0703 23:25:26.821905  173760 status.go:330] multinode-405072-m03 host status = "Stopped" (err=<nil>)
	I0703 23:25:26.821925  173760 status.go:343] host is not running, skipping remaining checks
	I0703 23:25:26.821936  173760 status.go:257] multinode-405072-m03 status: &{Name:multinode-405072-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 node start m03 -v=7 --alsologtostderr
E0703 23:25:29.652943   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-405072 node start m03 -v=7 --alsologtostderr: (7.705213217s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.33s)
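
Stopping and restarting an individual node, as exercised by the last two tests (profile name illustrative):

	minikube -p multinode-demo node stop m03
	minikube -p multinode-demo status        # exits 7 while any node is stopped
	minikube -p multinode-demo node start m03
	kubectl get nodes                        # the node should rejoin the cluster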

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (81.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-405072
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-405072
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-405072: (24.634350373s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405072 --wait=true -v=8 --alsologtostderr
E0703 23:26:52.699869   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405072 --wait=true -v=8 --alsologtostderr: (56.955446102s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-405072
--- PASS: TestMultiNode/serial/RestartKeepsNodes (81.68s)

                                                
                                    
TestMultiNode/serial/DeleteNode (4.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-405072 node delete m03: (4.409819168s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.93s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-405072 stop: (23.552808823s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405072 status: exit status 7 (76.869315ms)

                                                
                                                
-- stdout --
	multinode-405072
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-405072-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr: exit status 7 (75.184628ms)

                                                
                                                
-- stdout --
	multinode-405072
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-405072-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:27:25.429118  183479 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:27:25.429374  183479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:27:25.429382  183479 out.go:304] Setting ErrFile to fd 2...
	I0703 23:27:25.429385  183479 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:27:25.429578  183479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:27:25.429734  183479 out.go:298] Setting JSON to false
	I0703 23:27:25.429760  183479 mustload.go:65] Loading cluster: multinode-405072
	I0703 23:27:25.429848  183479 notify.go:220] Checking for updates...
	I0703 23:27:25.430180  183479 config.go:182] Loaded profile config "multinode-405072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:27:25.430197  183479 status.go:255] checking status of multinode-405072 ...
	I0703 23:27:25.430801  183479 cli_runner.go:164] Run: docker container inspect multinode-405072 --format={{.State.Status}}
	I0703 23:27:25.448035  183479 status.go:330] multinode-405072 host status = "Stopped" (err=<nil>)
	I0703 23:27:25.448052  183479 status.go:343] host is not running, skipping remaining checks
	I0703 23:27:25.448058  183479 status.go:257] multinode-405072 status: &{Name:multinode-405072 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:27:25.448079  183479 status.go:255] checking status of multinode-405072-m02 ...
	I0703 23:27:25.448324  183479 cli_runner.go:164] Run: docker container inspect multinode-405072-m02 --format={{.State.Status}}
	I0703 23:27:25.464323  183479 status.go:330] multinode-405072-m02 host status = "Stopped" (err=<nil>)
	I0703 23:27:25.464341  183479 status.go:343] host is not running, skipping remaining checks
	I0703 23:27:25.464346  183479 status.go:257] multinode-405072-m02 status: &{Name:multinode-405072-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.71s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405072 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405072 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.784042902s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-405072 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.31s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (22.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-405072
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405072-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-405072-m02 --driver=docker  --container-runtime=containerd: exit status 14 (57.251427ms)

                                                
                                                
-- stdout --
	* [multinode-405072-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-405072-m02' is duplicated with machine name 'multinode-405072-m02' in profile 'multinode-405072'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-405072-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-405072-m03 --driver=docker  --container-runtime=containerd: (19.990533479s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-405072
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-405072: exit status 80 (251.368329ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-405072 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-405072-m03 already exists in multinode-405072-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-405072-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-405072-m03: (2.147383454s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.49s)

                                                
                                    
TestPreload (133.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-730040 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-730040 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m28.920571238s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-730040 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-730040 image pull gcr.io/k8s-minikube/busybox: (2.131026485s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-730040
E0703 23:30:23.467539   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-730040: (11.845711194s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-730040 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E0703 23:30:29.652791   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-730040 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (27.769557075s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-730040 image list
helpers_test.go:175: Cleaning up "test-preload-730040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-730040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-730040: (2.239512847s)
--- PASS: TestPreload (133.13s)
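
The preload scenario in shell form (profile name illustrative; versions copied from the test): create a cluster on an older Kubernetes without the preloaded image tarball, side-load an image, then restart and confirm the image survives.

	minikube start -p preload-demo --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo --memory=2200 --wait=true --driver=docker --container-runtime=containerd
	# busybox should still be listed after the restart
	minikube -p preload-demo image list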

                                                
                                    
TestScheduledStopUnix (98.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-125970 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-125970 --memory=2048 --driver=docker  --container-runtime=containerd: (22.463124879s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-125970 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-125970 -n scheduled-stop-125970
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-125970 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-125970 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-125970 -n scheduled-stop-125970
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-125970
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-125970 --schedule 15s
E0703 23:31:46.511257   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-125970
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-125970: exit status 7 (58.371522ms)

                                                
                                                
-- stdout --
	scheduled-stop-125970
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-125970 -n scheduled-stop-125970
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-125970 -n scheduled-stop-125970: exit status 7 (60.392973ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-125970" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-125970
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-125970: (4.426475249s)
--- PASS: TestScheduledStopUnix (98.17s)
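
The scheduled-stop commands used above, in order (profile name illustrative):

	minikube start -p sched-demo --memory=2048 --driver=docker --container-runtime=containerd
	minikube stop -p sched-demo --schedule 5m                  # arm a delayed stop
	minikube status -p sched-demo --format={{.TimeToStop}}     # remaining time until the stop fires
	minikube stop -p sched-demo --cancel-scheduled             # disarm it
	minikube stop -p sched-demo --schedule 15s                 # short timer; the profile stops on its own
	minikube status -p sched-demo                              # exits 7 once the host is Stopped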

                                                
                                    
TestInsufficientStorage (9.2s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-403596 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-403596 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.914852945s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"5967c137-7c5b-4759-af1d-613cfc5ffaa9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-403596] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"25624f62-289a-42cc-b864-d2f0d03a195b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18859"}}
	{"specversion":"1.0","id":"1470abe4-fb54-432d-9ef0-d6ccdeac38f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"08c38d8d-a95a-4839-bb90-4af15a99eb71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig"}}
	{"specversion":"1.0","id":"a932e5d0-a45b-4453-aa81-792799a74ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube"}}
	{"specversion":"1.0","id":"0f09c66d-2e50-434a-94b5-f45fa552b3ee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"57cf8175-3a21-4649-a0a6-b0afd178ad41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4798bbd4-87d7-454c-90b3-e8b8d9bc45dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eaea9f38-66dd-42c1-89d1-c21e2020423e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1bfb1f4c-cf52-4cf6-9dcd-68c3a48f8dfc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0c170930-65a0-452c-af0d-7b4191086571","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"f437dfdb-746d-428b-9dd6-a6d52faff7ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-403596\" primary control-plane node in \"insufficient-storage-403596\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"c3450a8b-8229-41e5-a3e3-f6b0505857f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1719972989-19184 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e67da11-2687-4b76-9a32-6c345fac77ab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"2570d073-b07d-47e6-9f04-49ae7fea2d37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-403596 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-403596 --output=json --layout=cluster: exit status 7 (241.652294ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-403596","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-403596","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:32:39.488176  206423 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-403596" does not appear in /home/jenkins/minikube-integration/18859-12140/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-403596 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-403596 --output=json --layout=cluster: exit status 7 (238.559923ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-403596","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-403596","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0703 23:32:39.728013  206524 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-403596" does not appear in /home/jenkins/minikube-integration/18859-12140/kubeconfig
	E0703 23:32:39.736920  206524 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/insufficient-storage-403596/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-403596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-403596
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-403596: (1.805479441s)
--- PASS: TestInsufficientStorage (9.20s)
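
The storage gate is driven by test-only environment variables (MINIKUBE_TEST_STORAGE_CAPACITY, MINIKUBE_TEST_AVAILABLE_STORAGE), so it is not directly reproducible outside the harness, but the status query and the remediation steps from the RSRC_DOCKER_STORAGE advice are ordinary commands (profile name illustrative):

	# machine-readable status; StatusCode 507 maps to InsufficientStorage
	minikube status -p insufficient-demo --output=json --layout=cluster
	# remediation suggested by the error payload above
	docker system prune
	minikube ssh -- docker system prune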

                                                
                                    
TestRunningBinaryUpgrade (59.68s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3652621958 start -p running-upgrade-786560 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3652621958 start -p running-upgrade-786560 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (31.963700628s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-786560 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0703 23:35:23.465000   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:35:29.653390   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-786560 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (23.366358248s)
helpers_test.go:175: Cleaning up "running-upgrade-786560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-786560
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-786560: (1.906132728s)
--- PASS: TestRunningBinaryUpgrade (59.68s)

                                                
                                    
TestKubernetesUpgrade (318.91s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.264237888s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-579570
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-579570: (1.264580303s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-579570 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-579570 status --format={{.Host}}: exit status 7 (73.060578ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m30.238513844s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-579570 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (67.666437ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-579570] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-579570
	    minikube start -p kubernetes-upgrade-579570 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5795702 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-579570 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-579570 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4.616494546s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-579570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-579570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-579570: (2.312610754s)
--- PASS: TestKubernetesUpgrade (318.91s)
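
Note on the downgrade step above: minikube refuses to move the existing v1.30.2 cluster back to v1.20.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal shell sketch of the recovery path the error message itself suggests, assuming the profile name and versions from this run, would be:

    # Not part of the test; falls back to the delete-and-recreate path when
    # minikube reports that an in-place downgrade is unsupported (exit status 106).
    PROFILE=kubernetes-upgrade-579570
    TARGET=v1.20.0
    out/minikube-linux-amd64 start -p "$PROFILE" --kubernetes-version="$TARGET" --driver=docker --container-runtime=containerd
    if [ $? -eq 106 ]; then
        minikube delete -p "$PROFILE"
        minikube start -p "$PROFILE" --kubernetes-version="$TARGET"
    fi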

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.36s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (67.553447ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-059442] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)
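
The non-zero exit above is expected: --kubernetes-version cannot be combined with --no-kubernetes. A short sketch of the sequence the MK_USAGE message suggests (both commands taken from this log):

    # Clear any global kubernetes-version setting, then start without Kubernetes.
    minikube config unset kubernetes-version
    out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --driver=docker --container-runtime=containerd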

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (28.33s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059442 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059442 --driver=docker  --container-runtime=containerd: (27.999054443s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-059442 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.33s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (138.27s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3832451161 start -p stopped-upgrade-087699 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3832451161 start -p stopped-upgrade-087699 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m43.530273902s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3832451161 -p stopped-upgrade-087699 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3832451161 -p stopped-upgrade-087699 stop: (3.451803067s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-087699 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-087699 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.291549534s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (138.27s)
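
For reference, the upgrade path exercised above is: start with the old release binary, stop the cluster, then start the same profile with the binary under test. A rough shell sketch using the paths from this run (the /tmp path is the temporary v1.26.0 download for this job; outside this log you would substitute your own old binary):

    OLD=/tmp/minikube-v1.26.0.3832451161   # old release used in this run
    NEW=out/minikube-linux-amd64           # binary under test
    "$OLD" start -p stopped-upgrade-087699 --memory=2200 --vm-driver=docker --container-runtime=containerd
    "$OLD" -p stopped-upgrade-087699 stop
    "$NEW" start -p stopped-upgrade-087699 --memory=2200 --driver=docker --container-runtime=containerd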

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (22.73s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --driver=docker  --container-runtime=containerd: (20.605094956s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-059442 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-059442 status -o json: exit status 2 (270.271316ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-059442","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-059442
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-059442: (1.853603057s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (22.73s)

                                                
                                    
TestNoKubernetes/serial/Start (7.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059442 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.53466175s)
--- PASS: TestNoKubernetes/serial/Start (7.53s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-059442 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-059442 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.265691ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)
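
The check above passes because `systemctl is-active` returns non-zero when kubelet is not running inside the node. A small sketch of the same assertion as a standalone command (profile name taken from this log):

    # Expect failure here: a --no-kubernetes node should not have an active kubelet.
    if out/minikube-linux-amd64 ssh -p NoKubernetes-059442 "sudo systemctl is-active --quiet service kubelet"; then
        echo "unexpected: kubelet is active"
    else
        echo "ok: kubelet is not running"
    fi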

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.56s)

                                                
                                    
TestNoKubernetes/serial/Stop (3.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-059442
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-059442: (3.193555831s)
--- PASS: TestNoKubernetes/serial/Stop (3.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-059442 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-059442 --driver=docker  --container-runtime=containerd: (7.144763923s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.14s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-059442 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-059442 "sudo systemctl is-active --quiet service kubelet": exit status 1 (257.403192ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

                                                
                                    
TestNetworkPlugins/group/false (2.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-562730 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-562730 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (174.311677ms)

                                                
                                                
-- stdout --
	* [false-562730] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18859
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:34:31.656005  233510 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:34:31.656101  233510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:34:31.656109  233510 out.go:304] Setting ErrFile to fd 2...
	I0703 23:34:31.656112  233510 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:34:31.656270  233510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18859-12140/.minikube/bin
	I0703 23:34:31.656867  233510 out.go:298] Setting JSON to false
	I0703 23:34:31.657842  233510 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-8","uptime":4614,"bootTime":1720045058,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 23:34:31.657940  233510 start.go:139] virtualization: kvm guest
	I0703 23:34:31.659931  233510 out.go:177] * [false-562730] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 23:34:31.661520  233510 notify.go:220] Checking for updates...
	I0703 23:34:31.663726  233510 out.go:177]   - MINIKUBE_LOCATION=18859
	I0703 23:34:31.665065  233510 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 23:34:31.672412  233510 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
	I0703 23:34:31.673931  233510 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18859-12140/.minikube
	I0703 23:34:31.675069  233510 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 23:34:31.676287  233510 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 23:34:31.678196  233510 config.go:182] Loaded profile config "kubernetes-upgrade-579570": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.2
	I0703 23:34:31.678342  233510 config.go:182] Loaded profile config "missing-upgrade-167387": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0703 23:34:31.678477  233510 config.go:182] Loaded profile config "stopped-upgrade-087699": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.24.1
	I0703 23:34:31.678597  233510 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 23:34:31.711442  233510 docker.go:122] docker version: linux-27.0.3:Docker Engine - Community
	I0703 23:34:31.711552  233510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0703 23:34:31.775344  233510 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-07-03 23:34:31.761162819 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1062-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647947776 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-8 Labels:[] ExperimentalBuild:false ServerVersion:27.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e Expected:ae71819c4f5e67bb4d5ae76a6b735f29cc25774e} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.15.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.28.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0703 23:34:31.775548  233510 docker.go:295] overlay module found
	I0703 23:34:31.777519  233510 out.go:177] * Using the docker driver based on user configuration
	I0703 23:34:31.778730  233510 start.go:297] selected driver: docker
	I0703 23:34:31.778753  233510 start.go:901] validating driver "docker" against <nil>
	I0703 23:34:31.778768  233510 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 23:34:31.780997  233510 out.go:177] 
	W0703 23:34:31.782111  233510 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0703 23:34:31.783125  233510 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-562730 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-562730" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-579570
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-167387
contexts:
- context:
    cluster: kubernetes-upgrade-579570
    user: kubernetes-upgrade-579570
  name: kubernetes-upgrade-579570
- context:
    cluster: missing-upgrade-167387
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-167387
  name: missing-upgrade-167387
current-context: kubernetes-upgrade-579570
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-579570
  user:
    client-certificate: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kubernetes-upgrade-579570/client.crt
    client-key: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kubernetes-upgrade-579570/client.key
- name: missing-upgrade-167387
  user:
    client-certificate: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/client.crt
    client-key: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-562730

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-562730"

                                                
                                                
----------------------- debugLogs end: false-562730 [took: 2.57369742s] --------------------------------
helpers_test.go:175: Cleaning up "false-562730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-562730
--- PASS: TestNetworkPlugins/group/false (2.88s)
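
This group passes by confirming that --cni=false is rejected: the containerd runtime requires some CNI. If a cluster with this profile were actually wanted, a start command with an explicit CNI would be needed; a hedged example (the choice of "bridge" is only illustrative and not taken from the test):

    out/minikube-linux-amd64 start -p false-562730 --memory=2048 --cni=bridge --driver=docker --container-runtime=containerd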

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-087699
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.75s)

                                                
                                    
TestPause/serial/Start (47.57s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-062973 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-062973 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (47.570160106s)
--- PASS: TestPause/serial/Start (47.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.47s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-062973 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-062973 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.45060456s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.47s)

                                                
                                    
TestPause/serial/Pause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-062973 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

                                                
                                    
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-062973 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-062973 --output=json --layout=cluster: exit status 2 (269.637523ms)

                                                
                                                
-- stdout --
	{"Name":"pause-062973","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-062973","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
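
The exit status 2 above is the expected signal for a paused cluster; the JSON layout encodes the same state (StatusCode 418 / "Paused"). A small sketch for pulling those fields out of the status output, assuming jq is available (field names taken from the JSON shown above):

    out/minikube-linux-amd64 status -p pause-062973 --output=json --layout=cluster \
        | jq -r '.StatusName, .Nodes[0].Components.apiserver.StatusName'
    # prints "Paused" twice while the cluster is paused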

                                                
                                    
TestPause/serial/Unpause (0.55s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-062973 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.55s)

                                                
                                    
TestPause/serial/PauseAgain (0.67s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-062973 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.67s)

                                                
                                    
TestPause/serial/DeletePaused (2.44s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-062973 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-062973 --alsologtostderr -v=5: (2.441821342s)
--- PASS: TestPause/serial/DeletePaused (2.44s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (3.58s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.529212279s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-062973
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-062973: exit status 1 (16.052743ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-062973: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (3.58s)
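
The verification above boils down to: after `delete`, the profile's volume, container and network should all be gone. A hedged sketch of the same checks, with name filters added for readability (the filters are an addition, not from the test):

    docker volume inspect pause-062973 >/dev/null 2>&1 && echo "volume still present" || echo "volume gone"
    docker ps -a --filter "name=pause-062973" --format '{{.Names}}'
    docker network ls --filter "name=pause-062973" --format '{{.Name}}'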

                                                
                                    
TestNetworkPlugins/group/auto/Start (46.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (46.711385033s)
--- PASS: TestNetworkPlugins/group/auto/Start (46.71s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-p4psc" [6db6c346-72f2-46c3-95d7-e2d3db9a6846] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-p4psc" [6db6c346-72f2-46c3-95d7-e2d3db9a6846] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003780192s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.21s)
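
The wait loop above polls for pods with the app=netcat label to become Ready. Expressed as a one-liner (a sketch only; the test uses its own polling helpers rather than kubectl wait):

    kubectl --context auto-562730 wait --for=condition=ready pod -l app=netcat --timeout=15m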

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (51.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (51.68803105s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.69s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (57.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (57.591495154s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.59s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (50.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (50.287853541s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-plj6c" [ed99b786-548e-4062-8420-ad0fdfb4c25d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004356408s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nm8nx" [f2575801-01a4-4d5a-b369-0cec73156721] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nm8nx" [f2575801-01a4-4d5a-b369-0cec73156721] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003971336s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-6c8bp" [7902e6a7-5d51-4642-89e3-fb1d8ba52841] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005702144s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)
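The DNS, Localhost and HairPin steps above all exec into the netcat pod: DNS resolves the in-cluster name kubernetes.default, Localhost connects to port 8080 on the pod's own loopback, and HairPin connects back to the pod through its own netcat service name. The same three probes can be run by hand (a sketch, assuming the kindnet-562730 profile is still running):

    kubectl --context kindnet-562730 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod reaches itself through its own service, a path some CNIs mishandle
    kubectl --context kindnet-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"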

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s2gsz" [7e9e7ba6-f14c-43f6-baf5-e568ec55e9b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s2gsz" [7e9e7ba6-f14c-43f6-baf5-e568ec55e9b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004050591s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-z4k7s" [a8562300-2706-46cf-a7fc-e3f211d0e575] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-z4k7s" [a8562300-2706-46cf-a7fc-e3f211d0e575] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003834625s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m20.055003095s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.06s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.001586872s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.00s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (38.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0703 23:40:23.465356   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:40:29.652562   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-562730 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (38.975288465s)
--- PASS: TestNetworkPlugins/group/bridge/Start (38.98s)
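The Start steps in this group differ only in how the CNI is selected on the minikube command line; the variants exercised so far are the user-supplied manifest, the built-in flannel and bridge plugins, and the older default-CNI flag. Condensed from the invocations logged above (a sketch, with minikube standing in for out/minikube-linux-amd64 and the profile names and memory sizes as used in this run):

    minikube start -p custom-flannel-562730 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd  # CNI from a user-supplied manifest
    minikube start -p flannel-562730 --memory=3072 --cni=flannel --driver=docker --container-runtime=containerd                            # built-in flannel
    minikube start -p bridge-562730 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd                              # plain bridge CNI
    minikube start -p enable-default-cni-562730 --memory=3072 --enable-default-cni=true --driver=docker --container-runtime=containerd     # older flag form for the default CNI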

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-s9dgs" [ed8c7578-8c29-4f78-a3a5-e7e80e140220] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-s9dgs" [ed8c7578-8c29-4f78-a3a5-e7e80e140220] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.003489588s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-w4l8n" [01c0c360-5adb-45d3-b2b0-61e3c4fe511c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003538927s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-5v9zj" [543c97b2-128d-4d06-916f-3f9c987c60f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-5v9zj" [543c97b2-128d-4d06-916f-3f9c987c60f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003528904s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (137.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-581602 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-581602 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m17.525207299s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (137.53s)
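The old-k8s-version group pins the cluster to Kubernetes v1.20.0 via --kubernetes-version, unlike the other StartStop groups in this run, which use v1.30.2. A trimmed sketch of the first start (the full flag set is in the logged command above; minikube stands in for out/minikube-linux-amd64, and the version check is a hypothetical follow-up rather than part of the test):

    minikube start -p old-k8s-version-581602 --memory=2200 --driver=docker --container-runtime=containerd --kubernetes-version=v1.20.0
    kubectl --context old-k8s-version-581602 version   # hypothetical follow-up: confirm the server reports a v1.20.x version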

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-562730 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-562730 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zgskm" [2e0e68df-5435-4373-9a95-5915b58a68ba] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zgskm" [2e0e68df-5435-4373-9a95-5915b58a68ba] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003553225s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.69s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-562730 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-562730 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
E0703 23:48:58.127507   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (63.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-494892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-494892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (1m3.072475341s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (51.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-291471 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-291471 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (51.856613727s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-291471 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f4b9e334-7563-4f35-ac85-7ec4cb5e9eff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f4b9e334-7563-4f35-ac85-7ec4cb5e9eff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003340959s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-291471 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.22s)
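DeployApp creates the busybox pod from the integration-test manifest, waits for it to run, and then reads the open-file limit inside it. By hand against the embed-certs-291471 context (a sketch; the kubectl wait call stands in for the harness's own polling):

    kubectl --context embed-certs-291471 create -f testdata/busybox.yaml
    kubectl --context embed-certs-291471 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context embed-certs-291471 exec busybox -- /bin/sh -c "ulimit -n"   # prints the file-descriptor limit seen inside the container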

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-494892 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [324f9f66-e51f-45b9-acb8-e9293a274add] Pending
helpers_test.go:344: "busybox" [324f9f66-e51f-45b9-acb8-e9293a274add] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [324f9f66-e51f-45b9-acb8-e9293a274add] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.002870937s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-494892 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-291471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-291471 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.81s)
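EnableAddonWhileActive turns on the metrics-server addon with overridden image and registry values and then checks that the deployment object exists in kube-system. The same two commands are runnable outside the harness (a sketch, with minikube standing in for out/minikube-linux-amd64):

    minikube addons enable metrics-server -p embed-certs-291471 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-291471 describe deploy/metrics-server -n kube-system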

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (11.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-291471 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-291471 --alsologtostderr -v=3: (11.87199771s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.87s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-494892 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-494892 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-494892 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-494892 --alsologtostderr -v=3: (11.941371522s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291471 -n embed-certs-291471
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291471 -n embed-certs-291471: exit status 7 (61.69164ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-291471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)
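EnableAddonAfterStop leans on minikube status exit codes: the exit status 7 above corresponds to the Stopped host state shown in the captured stdout, which the test accepts before enabling the dashboard addon on the stopped profile. A sketch of the same sequence (minikube standing in for out/minikube-linux-amd64):

    minikube status --format={{.Host}} -p embed-certs-291471 -n embed-certs-291471   # prints "Stopped"
    echo $?                                                                          # 7, i.e. stopped rather than a hard failure
    minikube addons enable dashboard -p embed-certs-291471 --images=MetricsScraper=registry.k8s.io/echoserver:1.4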

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (262.5s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-291471 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 23:43:14.910817   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:14.916123   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:14.926376   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:14.947479   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:14.987772   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:15.068357   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:15.229051   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:15.549616   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:16.190773   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:17.471940   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-291471 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m22.202064804s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-291471 -n embed-certs-291471
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.50s)
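SecondStart re-runs the identical start command against the now-stopped profile, so the existing embed-certs configuration is reused rather than recreated, and then confirms the host came back. A compressed sketch (minikube standing in for out/minikube-linux-amd64):

    minikube start -p embed-certs-291471 --memory=2200 --embed-certs --driver=docker --container-runtime=containerd --kubernetes-version=v1.30.2
    minikube status --format={{.Host}} -p embed-certs-291471 -n embed-certs-291471   # expected to print "Running" after the restart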

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-494892 -n no-preload-494892
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-494892 -n no-preload-494892: exit status 7 (81.866265ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-494892 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (262.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-494892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 23:43:20.032699   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:25.153864   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:43:32.700610   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:43:35.395034   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-494892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m22.073828321s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-494892 -n no-preload-494892
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.36s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-581602 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [277ba625-c1f9-49ac-8994-4642a0af70c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [277ba625-c1f9-49ac-8994-4642a0af70c5] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004757523s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-581602 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-581602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-581602 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075894815s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-581602 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-581602 --alsologtostderr -v=3
E0703 23:43:55.875482   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-581602 --alsologtostderr -v=3: (12.082803981s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-581602 -n old-k8s-version-581602
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-581602 -n old-k8s-version-581602: exit status 7 (78.760334ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-581602 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (311.33s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-581602 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0703 23:44:30.055437   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.060719   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.071047   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.091300   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.131652   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.212003   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.372398   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:30.692851   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:31.333314   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:32.613925   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:35.174803   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:36.836078   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:44:40.294971   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:40.922757   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:40.927994   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:40.938224   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:40.958467   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:40.998719   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:41.079023   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:41.240073   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:41.560635   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:42.201212   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:43.482125   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:46.043131   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:47.480060   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:47.485298   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:47.495514   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:47.515775   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:47.556031   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:47.636334   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:47.796714   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:48.117351   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:48.758221   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:50.039080   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:50.536083   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:44:51.164176   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:44:52.599780   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:44:57.720570   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:45:01.404801   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:45:07.960900   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:45:11.017195   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:45:21.885972   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:45:23.465267   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:45:28.441617   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:45:29.653191   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:45:51.977939   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:45:55.954850   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:55.960083   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:55.970422   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:55.990692   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:56.030946   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:56.111256   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:56.271758   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:56.592707   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:57.233643   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:58.514531   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:45:58.756917   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:46:01.075539   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:46:02.846113   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:46:06.195770   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:46:09.402097   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:46:14.284972   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.290217   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.300447   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.320742   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.361052   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.441365   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.601753   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:14.921918   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:15.562186   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:16.436282   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:46:16.842676   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:19.403517   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:24.524529   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:25.963855   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:25.969083   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:25.979304   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:25.999531   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:26.039795   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:26.120057   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:26.280437   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:26.600995   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:27.241958   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:28.522925   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:31.084112   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:34.765151   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:46:36.204467   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:36.917305   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:46:46.445579   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:46:55.245589   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
E0703 23:47:06.925783   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
E0703 23:47:13.898112   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:47:17.877942   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
E0703 23:47:24.766736   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:47:31.322478   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
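Note: the cert_rotation.go:168 lines above (and similar ones later in this log) come from the shared test binary's client-go certificate watcher, which still tracks client certificates for network-plugin profiles (flannel-562730, bridge-562730, enable-default-cni-562730, ...) that were deleted earlier in the run; since the surrounding tests pass, they appear to be log noise rather than failures. A minimal shell sketch of how one could spot such stale references by hand (the kubeconfig path is this job's; the loop itself is illustrative and not part of the suite):

  KUBECONFIG=/home/jenkins/minikube-integration/18859-12140/kubeconfig
  # Print every client certificate the kubeconfig still points at that no longer exists on disk.
  grep -h 'client-certificate:' "$KUBECONFIG" | awk '{print $2}' | while read -r crt; do
    [ -e "$crt" ] || echo "stale reference: $crt"
  done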
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-581602 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (5m11.021678992s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-581602 -n old-k8s-version-581602
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (311.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-xqjwc" [d2e76038-789c-4268-b965-88bccff83274] Running
E0703 23:47:36.206369   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/flannel-562730/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003270734s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
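For reference, the wait performed here can be approximated from the command line; a rough equivalent of the 9m dashboard-pod check (the real helper polls through the Go client, so this is only a sketch):

  kubectl --context embed-certs-291471 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
  kubectl --context embed-certs-291471 -n kubernetes-dashboard \
    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m0s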

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-xqjwc" [d2e76038-789c-4268-b965-88bccff83274] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003630552s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-291471 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fxjdd" [2f8d7a72-6a0f-41cc-affa-e82aec5fc8d9] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004091241s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-291471 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
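The image audit can be reproduced by hand: list the profile's images and filter out the ones minikube itself ships. The exclusion patterns below are illustrative assumptions (exact image refs depend on the runtime); whatever remains corresponds to the "Found non-minikube image" lines above.

  out/minikube-linux-amd64 -p embed-certs-291471 image list \
    | grep -v -E 'registry\.k8s\.io/|k8s-minikube/storage-provisioner|kubernetesui/'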

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-291471 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291471 -n embed-certs-291471
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291471 -n embed-certs-291471: exit status 2 (277.665372ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291471 -n embed-certs-291471
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291471 -n embed-certs-291471: exit status 2 (280.459575ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-291471 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291471 -n embed-certs-291471
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291471 -n embed-certs-291471
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.54s)
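Spelled out, the pause round-trip above is just the following command sequence; minikube status intentionally exits with code 2 while components are paused, which is why the non-zero exits are marked "(may be ok)":

  out/minikube-linux-amd64 pause -p embed-certs-291471 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291471   # "Paused", exit code 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-291471     # "Stopped", exit code 2
  out/minikube-linux-amd64 unpause -p embed-certs-291471 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-291471   # exit code 0 again once unpaused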

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fxjdd" [2f8d7a72-6a0f-41cc-affa-e82aec5fc8d9] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0037223s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-494892 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-594319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-594319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (51.34956332s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.35s)
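This profile starts the API server on port 8444 via --apiserver-port=8444 rather than the default 8443. A quick, hand-run way to confirm the non-default port once the cluster is up (a sketch; the address shown is the node's internal IP):

  kubectl --context default-k8s-diff-port-594319 get endpoints kubernetes   # expect <node-ip>:8444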

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-494892 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-494892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-494892 -n no-preload-494892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-494892 -n no-preload-494892: exit status 2 (283.379192ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-494892 -n no-preload-494892
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-494892 -n no-preload-494892: exit status 2 (317.819434ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-494892 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-494892 -n no-preload-494892
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-494892 -n no-preload-494892
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.80s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-427650 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 23:48:14.910786   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
E0703 23:48:26.512078   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-427650 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (36.240199866s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.24s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-427650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-427650 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-427650 --alsologtostderr -v=3: (1.184682043s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-427650 -n newest-cni-427650
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-427650 -n newest-cni-427650: exit status 7 (60.797113ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-427650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
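This step verifies that addon configuration still works against a stopped profile: status --format={{.Host}} exits with code 7 when the node is stopped (hence another "(may be ok)"), and the dashboard addon is then enabled offline. Roughly, by hand:

  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-427650   # "Stopped", exit code 7
  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-427650 \
    --images=MetricsScraper=registry.k8s.io/echoserver:1.4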

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-427650 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 23:48:39.798175   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-427650 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (13.078095101s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-427650 -n newest-cni-427650
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-594319 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ba4ec228-87ee-4c4d-b29c-30db4f92c257] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0703 23:48:42.598143   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/auto-562730/client.crt: no such file or directory
helpers_test.go:344: "busybox" [ba4ec228-87ee-4c4d-b29c-30db4f92c257] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003066987s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-594319 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)
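The DeployApp step amounts to creating the busybox pod from the test's testdata, waiting up to 8m for it to run, and reading the container's open-file limit. A hand-run equivalent (the kubectl wait line is an illustrative stand-in for the test's own polling helper):

  kubectl --context default-k8s-diff-port-594319 create -f testdata/busybox.yaml
  kubectl --context default-k8s-diff-port-594319 wait --for=condition=Ready pod/busybox --timeout=8m0s
  kubectl --context default-k8s-diff-port-594319 exec busybox -- /bin/sh -c "ulimit -n"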

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-427650 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-427650 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-427650 -n newest-cni-427650
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-427650 -n newest-cni-427650: exit status 2 (313.569466ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-427650 -n newest-cni-427650
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-427650 -n newest-cni-427650: exit status 2 (306.312758ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-427650 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-427650 -n newest-cni-427650
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-427650 -n newest-cni-427650
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.65s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-594319 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-594319 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-594319 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-594319 --alsologtostderr -v=3: (11.946989465s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.95s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319: exit status 7 (66.990316ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-594319 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-594319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2
E0703 23:49:09.806274   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/enable-default-cni-562730/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-594319 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.2: (4m22.002846746s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kqcxf" [0b5c842b-5aa7-44f4-95bd-97ddff5faf2c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004318919s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-kqcxf" [0b5c842b-5aa7-44f4-95bd-97ddff5faf2c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004266699s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-581602 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-581602 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-581602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-581602 -n old-k8s-version-581602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-581602 -n old-k8s-version-581602: exit status 2 (271.76058ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-581602 -n old-k8s-version-581602
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-581602 -n old-k8s-version-581602: exit status 2 (271.566311ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-581602 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-581602 -n old-k8s-version-581602
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-581602 -n old-k8s-version-581602
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7pspq" [1a0b99fc-4915-49d0-87a1-3d83c7d88461] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003534119s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7pspq" [1a0b99fc-4915-49d0-87a1-3d83c7d88461] Running
E0703 23:53:36.148173   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/no-preload-494892/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003901917s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-594319 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-594319 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240513-cd2ac642
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-594319 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319: exit status 2 (265.996694ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319: exit status 2 (263.124054ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-594319 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319
E0703 23:53:39.289572   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-594319 -n default-k8s-diff-port-594319
E0703 23:53:39.294708   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:53:39.305682   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:53:39.325914   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:53:39.366193   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:53:39.446521   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.47s)
E0703 23:53:44.408079   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:53:49.528475   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:53:59.769516   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:54:17.109331   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/no-preload-494892/client.crt: no such file or directory
E0703 23:54:20.250024   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:54:30.056093   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kindnet-562730/client.crt: no such file or directory
E0703 23:54:40.921938   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/calico-562730/client.crt: no such file or directory
E0703 23:54:47.480461   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/custom-flannel-562730/client.crt: no such file or directory
E0703 23:55:01.210706   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/old-k8s-version-581602/client.crt: no such file or directory
E0703 23:55:23.465533   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/functional-444884/client.crt: no such file or directory
E0703 23:55:29.652868   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/addons-095645/client.crt: no such file or directory
E0703 23:55:39.029939   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/no-preload-494892/client.crt: no such file or directory
E0703 23:55:55.954399   18931 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/bridge-562730/client.crt: no such file or directory

                                                
                                    

Test skip (23/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
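The debugLogs dump below appears to be gathered during cleanup even though the test was skipped, and the kubenet-562730 profile was never created, so every probe fails with a missing-context or missing-profile message. Two quick commands that confirm the same thing (sketch):

  kubectl config get-contexts kubenet-562730    # error: context not found
  out/minikube-linux-amd64 profile list         # kubenet-562730 does not appear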
panic.go:626: 
----------------------- debugLogs start: kubenet-562730 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-562730" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-579570
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-167387
contexts:
- context:
    cluster: kubernetes-upgrade-579570
    user: kubernetes-upgrade-579570
  name: kubernetes-upgrade-579570
- context:
    cluster: missing-upgrade-167387
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-167387
  name: missing-upgrade-167387
current-context: kubernetes-upgrade-579570
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-579570
  user:
    client-certificate: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kubernetes-upgrade-579570/client.crt
    client-key: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kubernetes-upgrade-579570/client.key
- name: missing-upgrade-167387
  user:
    client-certificate: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/client.crt
    client-key: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-562730

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-562730"

                                                
                                                
----------------------- debugLogs end: kubenet-562730 [took: 4.855078988s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-562730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-562730
--- SKIP: TestNetworkPlugins/group/kubenet (5.02s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (2.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-562730 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-562730" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:30 UTC
        provider: minikube.sigs.k8s.io
        version: v1.33.1
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-579570
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18859-12140/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-167387
contexts:
- context:
    cluster: kubernetes-upgrade-579570
    user: kubernetes-upgrade-579570
  name: kubernetes-upgrade-579570
- context:
    cluster: missing-upgrade-167387
    extensions:
    - extension:
        last-update: Wed, 03 Jul 2024 23:34:14 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-167387
  name: missing-upgrade-167387
current-context: kubernetes-upgrade-579570
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-579570
  user:
    client-certificate: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kubernetes-upgrade-579570/client.crt
    client-key: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/kubernetes-upgrade-579570/client.key
- name: missing-upgrade-167387
  user:
    client-certificate: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/client.crt
    client-key: /home/jenkins/minikube-integration/18859-12140/.minikube/profiles/missing-upgrade-167387/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-562730

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-562730" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-562730"

                                                
                                                
----------------------- debugLogs end: cilium-562730 [took: 2.845628892s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-562730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-562730
--- SKIP: TestNetworkPlugins/group/cilium (2.98s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-533407" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-533407
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    