Test Report: KVM_Linux 17339

0bd9d646542d61029f9b8266606b7c3eba162004:2023-10-02:31263

Failed tests (3/313)

Order  Failed test                                                        Duration (s)
150    TestImageBuild/serial/Setup                                        25.77
227    TestKubernetesUpgrade                                              99.09
366    TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages  2.11
TestImageBuild/serial/Setup (25.77s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-277236 --driver=kvm2 
image_test.go:69: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p image-277236 --driver=kvm2 : exit status 90 (25.549032853s)

-- stdout --
	* [image-277236] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node image-277236 in cluster image-277236
	* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
image_test.go:70: failed to start minikube with args: "out/minikube-linux-amd64 start -p image-277236 --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p image-277236 -n image-277236
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p image-277236 -n image-277236: exit status 6 (224.773433ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 18:33:43.030254  142128 status.go:415] kubeconfig endpoint: extract IP: "image-277236" does not appear in /home/jenkins/minikube-integration/17339-126802/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "image-277236" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestImageBuild/serial/Setup (25.77s)
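Three distinct exit statuses appear in this report: 90 from `minikube start` (the RUNTIME_ENABLE failure above), 6 from `minikube status` (kubeconfig endpoint missing), and 7 from `minikube status` after a stop. A minimal triage sketch, as a hypothetical helper (not a minikube command), using only the remedies the log output itself suggests:

```shell
#!/bin/sh
# Hypothetical triage helper: map the exit statuses seen in this report
# to the follow-up step each one suggests. The status-to-meaning mapping
# here is read off this log, not taken from minikube documentation.
explain_status() {
    case "$1" in
        90) echo "container runtime failed to enable: check 'journalctl -xe' inside the VM" ;;
        6)  echo "kubeconfig endpoint missing: run 'minikube update-context' (may be ok)" ;;
        7)  echo "host not running: expected right after 'minikube stop' (may be ok)" ;;
        *)  echo "unclassified: attach 'minikube logs --file=logs.txt' to a GitHub issue" ;;
    esac
}

explain_status 90   # -> container runtime failed to enable: check 'journalctl -xe' inside the VM
```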

TestKubernetesUpgrade (99.09s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade


=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-440505 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-440505 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m8.967169692s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-440505
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-440505: (3.302227155s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-440505 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-440505 status --format={{.Host}}: exit status 7 (65.675172ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-440505 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 
E1002 19:03:19.612708  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:19.617980  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:19.628323  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:19.648656  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:19.689021  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:19.769450  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:19.929928  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-440505 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2 : exit status 90 (25.267165015s)

-- stdout --
	* [kubernetes-upgrade-440505] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node kubernetes-upgrade-440505 in cluster kubernetes-upgrade-440505
	* Restarting existing kvm2 VM for "kubernetes-upgrade-440505" ...
	
	

-- /stdout --
** stderr ** 
	I1002 19:02:56.562734  157637 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:02:56.562978  157637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:02:56.562989  157637 out.go:309] Setting ErrFile to fd 2...
	I1002 19:02:56.562993  157637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:02:56.563295  157637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	I1002 19:02:56.563917  157637 out.go:303] Setting JSON to false
	I1002 19:02:56.565043  157637 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6323,"bootTime":1696267054,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:02:56.565112  157637 start.go:138] virtualization: kvm guest
	I1002 19:02:56.567539  157637 out.go:177] * [kubernetes-upgrade-440505] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 19:02:56.569517  157637 out.go:177]   - MINIKUBE_LOCATION=17339
	I1002 19:02:56.569600  157637 notify.go:220] Checking for updates...
	I1002 19:02:56.570936  157637 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:02:56.572391  157637 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 19:02:56.573813  157637 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 19:02:56.575221  157637 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:02:56.576778  157637 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:02:56.578802  157637 config.go:182] Loaded profile config "kubernetes-upgrade-440505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 19:02:56.579412  157637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:02:56.579496  157637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:02:56.595739  157637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I1002 19:02:56.596199  157637 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:02:56.596795  157637 main.go:141] libmachine: Using API Version  1
	I1002 19:02:56.596812  157637 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:02:56.597230  157637 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:02:56.597458  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:02:56.597731  157637 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 19:02:56.598077  157637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:02:56.598118  157637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:02:56.615004  157637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38595
	I1002 19:02:56.615510  157637 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:02:56.616035  157637 main.go:141] libmachine: Using API Version  1
	I1002 19:02:56.616067  157637 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:02:56.616528  157637 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:02:56.616743  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:02:56.656962  157637 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 19:02:56.658323  157637 start.go:298] selected driver: kvm2
	I1002 19:02:56.658343  157637 start.go:902] validating driver "kvm2" against &{Name:kubernetes-upgrade-440505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-440505 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.201 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:02:56.658447  157637 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:02:56.659066  157637 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:02:56.659135  157637 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17339-126802/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:02:56.675356  157637 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 19:02:56.675725  157637 cni.go:84] Creating CNI manager for ""
	I1002 19:02:56.675752  157637 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:02:56.675766  157637 start_flags.go:321] config:
	{Name:kubernetes-upgrade-440505 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:kubernetes-upgrade-440505 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.83.201 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:02:56.675956  157637 iso.go:125] acquiring lock: {Name:mkf7650ebae79a7eed75eeedd5ceff434d4c4f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:02:56.677893  157637 out.go:177] * Starting control plane node kubernetes-upgrade-440505 in cluster kubernetes-upgrade-440505
	I1002 19:02:56.679526  157637 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:02:56.679580  157637 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 19:02:56.679590  157637 cache.go:57] Caching tarball of preloaded images
	I1002 19:02:56.679684  157637 preload.go:174] Found /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 19:02:56.679696  157637 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 19:02:56.679829  157637 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubernetes-upgrade-440505/config.json ...
	I1002 19:02:56.680014  157637 start.go:365] acquiring machines lock for kubernetes-upgrade-440505: {Name:mk379ca3a60ac28865ffc8c5642c99eadc78dc32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:02:56.680060  157637 start.go:369] acquired machines lock for "kubernetes-upgrade-440505" in 26.626µs
	I1002 19:02:56.680074  157637 start.go:96] Skipping create...Using existing machine configuration
	I1002 19:02:56.680082  157637 fix.go:54] fixHost starting: 
	I1002 19:02:56.680333  157637 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:02:56.680366  157637 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:02:56.697078  157637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34159
	I1002 19:02:56.697648  157637 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:02:56.698191  157637 main.go:141] libmachine: Using API Version  1
	I1002 19:02:56.698220  157637 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:02:56.698597  157637 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:02:56.698837  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:02:56.699021  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetState
	I1002 19:02:56.701338  157637 fix.go:102] recreateIfNeeded on kubernetes-upgrade-440505: state=Stopped err=<nil>
	I1002 19:02:56.701436  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	W1002 19:02:56.701653  157637 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 19:02:56.703974  157637 out.go:177] * Restarting existing kvm2 VM for "kubernetes-upgrade-440505" ...
	I1002 19:02:56.705525  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .Start
	I1002 19:02:56.705780  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Ensuring networks are active...
	I1002 19:02:56.706846  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Ensuring network default is active
	I1002 19:02:56.707250  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Ensuring network mk-kubernetes-upgrade-440505 is active
	I1002 19:02:56.707711  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Getting domain xml...
	I1002 19:02:56.708698  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Creating domain...
	I1002 19:02:58.104432  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Waiting to get IP...
	I1002 19:02:58.107575  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.108137  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.108235  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:02:58.108120  157672 retry.go:31] will retry after 192.07997ms: waiting for machine to come up
	I1002 19:02:58.301951  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.302459  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.302495  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:02:58.302383  157672 retry.go:31] will retry after 249.664763ms: waiting for machine to come up
	I1002 19:02:58.553696  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.554248  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.554283  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:02:58.554189  157672 retry.go:31] will retry after 414.971865ms: waiting for machine to come up
	I1002 19:02:58.970841  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.971379  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:02:58.971412  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:02:58.971311  157672 retry.go:31] will retry after 580.965368ms: waiting for machine to come up
	I1002 19:02:59.553851  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:02:59.554337  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:02:59.554365  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:02:59.554276  157672 retry.go:31] will retry after 667.901128ms: waiting for machine to come up
	I1002 19:03:00.223444  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:00.223830  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:00.223858  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:00.223790  157672 retry.go:31] will retry after 669.809435ms: waiting for machine to come up
	I1002 19:03:00.895692  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:00.896175  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:00.896196  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:00.896122  157672 retry.go:31] will retry after 1.11170317s: waiting for machine to come up
	I1002 19:03:02.009421  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:02.009977  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:02.010038  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:02.009940  157672 retry.go:31] will retry after 1.429644448s: waiting for machine to come up
	I1002 19:03:03.441674  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:03.442204  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:03.442239  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:03.442163  157672 retry.go:31] will retry after 1.422272983s: waiting for machine to come up
	I1002 19:03:04.866609  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:04.867068  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:04.867097  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:04.867004  157672 retry.go:31] will retry after 1.690730741s: waiting for machine to come up
	I1002 19:03:06.559652  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:06.560206  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:06.560240  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:06.560137  157672 retry.go:31] will retry after 2.900114286s: waiting for machine to come up
	I1002 19:03:09.462427  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:09.462934  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:09.462964  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:09.462895  157672 retry.go:31] will retry after 3.446946434s: waiting for machine to come up
	I1002 19:03:12.911757  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:12.912304  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | unable to find current IP address of domain kubernetes-upgrade-440505 in network mk-kubernetes-upgrade-440505
	I1002 19:03:12.912339  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | I1002 19:03:12.912279  157672 retry.go:31] will retry after 3.163273704s: waiting for machine to come up
	I1002 19:03:16.079114  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.079671  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has current primary IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.079705  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Found IP for machine: 192.168.83.201
	I1002 19:03:16.079729  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Reserving static IP address...
	I1002 19:03:16.080295  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-440505", mac: "52:54:00:58:26:e9", ip: "192.168.83.201"} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.080329  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | skip adding static IP to network mk-kubernetes-upgrade-440505 - found existing host DHCP lease matching {name: "kubernetes-upgrade-440505", mac: "52:54:00:58:26:e9", ip: "192.168.83.201"}
	I1002 19:03:16.080350  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | Getting to WaitForSSH function...
	I1002 19:03:16.080367  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Reserved static IP address: 192.168.83.201
	I1002 19:03:16.080378  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Waiting for SSH to be available...
	I1002 19:03:16.083367  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.083724  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.083773  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.084247  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | Using SSH client type: external
	I1002 19:03:16.084273  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | Using SSH private key: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/kubernetes-upgrade-440505/id_rsa (-rw-------)
	I1002 19:03:16.084308  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.83.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17339-126802/.minikube/machines/kubernetes-upgrade-440505/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:03:16.084328  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | About to run SSH command:
	I1002 19:03:16.084347  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | exit 0
	I1002 19:03:16.177905  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | SSH cmd err, output: <nil>: 
	I1002 19:03:16.178352  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetConfigRaw
	I1002 19:03:16.179116  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetIP
	I1002 19:03:16.182310  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.182727  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.182774  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.183082  157637 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubernetes-upgrade-440505/config.json ...
	I1002 19:03:16.183354  157637 machine.go:88] provisioning docker machine ...
	I1002 19:03:16.183383  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:16.183669  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetMachineName
	I1002 19:03:16.183871  157637 buildroot.go:166] provisioning hostname "kubernetes-upgrade-440505"
	I1002 19:03:16.183895  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetMachineName
	I1002 19:03:16.184057  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:16.187122  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.187574  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.187605  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.187836  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:16.188047  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.188228  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.188383  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:16.188566  157637 main.go:141] libmachine: Using SSH client type: native
	I1002 19:03:16.188969  157637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I1002 19:03:16.188987  157637 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-440505 && echo "kubernetes-upgrade-440505" | sudo tee /etc/hostname
	I1002 19:03:16.330072  157637 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-440505
	
	I1002 19:03:16.330105  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:16.333509  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.333973  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.334015  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.334202  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:16.334476  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.334752  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.334941  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:16.335176  157637 main.go:141] libmachine: Using SSH client type: native
	I1002 19:03:16.335697  157637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I1002 19:03:16.335731  157637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-440505' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-440505/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-440505' | sudo tee -a /etc/hosts; 
				fi
			fi
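The `/etc/hosts` command above is idempotent: it only touches the file when no entry for the hostname exists, rewriting an existing `127.0.1.1` line or appending a new one. A minimal standalone sketch of the same pattern, operating on a scratch file rather than the real `/etc/hosts` and using a hypothetical hostname:

```shell
#!/bin/sh
# Idempotently ensure a 127.0.1.1 entry for $NAME exists in $HOSTS.
# Mirrors the pattern in the log: rewrite an existing 127.0.1.1 line,
# otherwise append one; do nothing if the name is already present.
NAME="demo-node"                # hypothetical hostname, not from the log
HOSTS="$(mktemp)"               # scratch file standing in for /etc/hosts
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
    if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
        # an entry exists for some other name: rewrite it in place
        sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
    else
        # no 127.0.1.1 entry at all: append one
        echo "127.0.1.1 $NAME" >> "$HOSTS"
    fi
fi
cat "$HOSTS"
```

Running the sketch a second time leaves the file unchanged, which is what makes the real command safe to re-run on every provision.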
	I1002 19:03:16.468877  157637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:03:16.468910  157637 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17339-126802/.minikube CaCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17339-126802/.minikube}
	I1002 19:03:16.468977  157637 buildroot.go:174] setting up certificates
	I1002 19:03:16.468995  157637 provision.go:83] configureAuth start
	I1002 19:03:16.469010  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetMachineName
	I1002 19:03:16.469334  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetIP
	I1002 19:03:16.472805  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.473189  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.473229  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.473397  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:16.476238  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.476584  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.476636  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.476801  157637 provision.go:138] copyHostCerts
	I1002 19:03:16.476873  157637 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem, removing ...
	I1002 19:03:16.476888  157637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem
	I1002 19:03:16.476967  157637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem (1679 bytes)
	I1002 19:03:16.477075  157637 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem, removing ...
	I1002 19:03:16.477086  157637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem
	I1002 19:03:16.477123  157637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem (1082 bytes)
	I1002 19:03:16.477194  157637 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem, removing ...
	I1002 19:03:16.477204  157637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem
	I1002 19:03:16.477236  157637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem (1123 bytes)
	I1002 19:03:16.477298  157637 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-440505 san=[192.168.83.201 192.168.83.201 localhost 127.0.0.1 minikube kubernetes-upgrade-440505]
	I1002 19:03:16.678912  157637 provision.go:172] copyRemoteCerts
	I1002 19:03:16.678987  157637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:03:16.679031  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:16.681836  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.682218  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.682253  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.682415  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:16.682640  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.682793  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:16.682960  157637 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/kubernetes-upgrade-440505/id_rsa Username:docker}
	I1002 19:03:16.772401  157637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 19:03:16.795498  157637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1002 19:03:16.820824  157637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1002 19:03:16.843084  157637 provision.go:86] duration metric: configureAuth took 374.073646ms
	I1002 19:03:16.843112  157637 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:03:16.843279  157637 config.go:182] Loaded profile config "kubernetes-upgrade-440505": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:03:16.843305  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:16.843619  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:16.846602  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.846942  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.846988  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.847186  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:16.847423  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.847623  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.847835  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:16.848034  157637 main.go:141] libmachine: Using SSH client type: native
	I1002 19:03:16.848448  157637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I1002 19:03:16.848470  157637 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:03:16.967301  157637 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 19:03:16.967330  157637 buildroot.go:70] root file system type: tmpfs
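The probe above detects the guest's root filesystem type (here `tmpfs`, as expected for a buildroot live image) with a single GNU `df` invocation. The same one-liner works on any Linux host with coreutils:

```shell
# Report only the filesystem type of /: --output=fstype prints a
# one-column table whose last line is the value itself.
fstype=$(df --output=fstype / | tail -n 1)
echo "root fs: $fstype"
```

The value varies by system (`ext4`, `btrfs`, `tmpfs`, ...); minikube branches on it because a tmpfs root means changes do not survive a reboot.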
	I1002 19:03:16.967460  157637 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:03:16.967486  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:16.970341  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.970728  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:16.970771  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:16.970941  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:16.971218  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.971429  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:16.971596  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:16.971817  157637 main.go:141] libmachine: Using SSH client type: native
	I1002 19:03:16.972127  157637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I1002 19:03:16.972199  157637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:03:17.108481  157637 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
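The unit file just written relies on the systemd override idiom called out in its own comments: an empty `ExecStart=` first clears any command inherited from a base unit, because a non-`oneshot` service may carry only one `ExecStart=` setting. A minimal drop-in illustrating the same idiom (the path and dockerd flags here are illustrative, not minikube's):

```ini
# /etc/systemd/system/docker.service.d/override.conf (illustrative path)
[Service]
# Clear the ExecStart inherited from the base docker.service ...
ExecStart=
# ... then set the replacement command. Without the empty line above,
# systemd would see two ExecStart= settings and refuse to start.
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

After editing a drop-in, `systemctl daemon-reload` is required before the change takes effect, which is exactly what the log does next.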
	I1002 19:03:17.108518  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:17.112227  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:17.112793  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:17.112832  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:17.113020  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:17.113280  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:17.113557  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:17.113744  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:17.113957  157637 main.go:141] libmachine: Using SSH client type: native
	I1002 19:03:17.114305  157637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I1002 19:03:17.114334  157637 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:03:18.175781  157637 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 19:03:18.175811  157637 machine.go:91] provisioned docker machine in 1.992440143s
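The `diff ... || { mv ...; systemctl ...; }` one-liner above is a change-aware install: the new unit replaces the old one and the daemon restarts only when the files differ, or, as in this run, when the target does not exist yet, since `diff` then also exits nonzero (hence the "can't stat" message followed by the symlink creation). A standalone sketch of the same pattern on scratch files, with a stand-in for the restart action:

```shell
#!/bin/sh
# Install $new over $cur only when contents differ; otherwise leave the
# service untouched. `diff` exits 0 on identical files and nonzero on
# differences or a missing target, mirroring the log's one-liner.
cur="$(mktemp -u)"              # target path; intentionally absent at first
new="$(mktemp)"
echo "ExecStart=/usr/bin/dockerd" > "$new"

if ! diff -u "$cur" "$new" >/dev/null 2>&1; then
    mv "$new" "$cur"
    action="reloaded"           # stand-in for daemon-reload + restart
else
    action="unchanged"
fi
echo "$action"
```

The design choice matters on re-provisioning: when the rendered unit is byte-identical to the installed one, Docker is left running and the provision step is effectively a no-op.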
	I1002 19:03:18.175825  157637 start.go:300] post-start starting for "kubernetes-upgrade-440505" (driver="kvm2")
	I1002 19:03:18.175839  157637 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:03:18.175863  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:18.176264  157637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:03:18.176325  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:18.179903  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.180343  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:18.180383  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.180567  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:18.180816  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:18.181036  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:18.181308  157637 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/kubernetes-upgrade-440505/id_rsa Username:docker}
	I1002 19:03:18.276626  157637 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:03:18.281483  157637 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 19:03:18.281521  157637 filesync.go:126] Scanning /home/jenkins/minikube-integration/17339-126802/.minikube/addons for local assets ...
	I1002 19:03:18.281608  157637 filesync.go:126] Scanning /home/jenkins/minikube-integration/17339-126802/.minikube/files for local assets ...
	I1002 19:03:18.281712  157637 filesync.go:149] local asset: /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem -> 1340252.pem in /etc/ssl/certs
	I1002 19:03:18.281840  157637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 19:03:18.291401  157637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem --> /etc/ssl/certs/1340252.pem (1708 bytes)
	I1002 19:03:18.318393  157637 start.go:303] post-start completed in 142.547054ms
	I1002 19:03:18.318422  157637 fix.go:56] fixHost completed within 21.638339462s
	I1002 19:03:18.318450  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:18.321496  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.321892  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:18.321919  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.322187  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:18.322416  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:18.322564  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:18.322731  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:18.322955  157637 main.go:141] libmachine: Using SSH client type: native
	I1002 19:03:18.323356  157637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.83.201 22 <nil> <nil>}
	I1002 19:03:18.323374  157637 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1002 19:03:18.447530  157637 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696273398.426711737
	
	I1002 19:03:18.447558  157637 fix.go:206] guest clock: 1696273398.426711737
	I1002 19:03:18.447568  157637 fix.go:219] Guest: 2023-10-02 19:03:18.426711737 +0000 UTC Remote: 2023-10-02 19:03:18.318427541 +0000 UTC m=+21.794166425 (delta=108.284196ms)
	I1002 19:03:18.447593  157637 fix.go:190] guest clock delta is within tolerance: 108.284196ms
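The clock check above compares the guest's `date +%s.%N` reading against the host-side timestamp and accepts the drift if the absolute delta is under a tolerance. A shell sketch of the same arithmetic, using the two timestamps from this run (the one-second tolerance is illustrative, not minikube's configured value):

```shell
#!/bin/sh
# Absolute guest/host clock delta, accepted when under a tolerance.
guest=1696273398.426711737      # guest `date +%s.%N` reading (from the log)
remote=1696273398.318427541     # host-side reference timestamp (from the log)
tolerance=1                     # seconds; illustrative

# POSIX shell has no float arithmetic, so delegate to awk.
delta=$(awk -v g="$guest" -v r="$remote" 'BEGIN { d = g - r; if (d < 0) d = -d; print d }')
within=$(awk -v d="$delta" -v t="$tolerance" 'BEGIN { print (d < t) ? "yes" : "no" }')
echo "delta=${delta}s within_tolerance=$within"
```

When the delta exceeds tolerance, minikube resyncs the guest clock instead; here the ~108ms drift is simply accepted.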
	I1002 19:03:18.447599  157637 start.go:83] releasing machines lock for "kubernetes-upgrade-440505", held for 21.767529538s
	I1002 19:03:18.447629  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:18.447954  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetIP
	I1002 19:03:18.450989  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.451496  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:18.451537  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.451805  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:18.452493  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:18.452697  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .DriverName
	I1002 19:03:18.452793  157637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:03:18.452860  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:18.453097  157637 ssh_runner.go:195] Run: cat /version.json
	I1002 19:03:18.453126  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHHostname
	I1002 19:03:18.455900  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.455948  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.456284  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:18.456345  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:58:26:e9", ip: ""} in network mk-kubernetes-upgrade-440505: {Iface:virbr4 ExpiryTime:2023-10-02 20:02:00 +0000 UTC Type:0 Mac:52:54:00:58:26:e9 Iaid: IPaddr:192.168.83.201 Prefix:24 Hostname:kubernetes-upgrade-440505 Clientid:01:52:54:00:58:26:e9}
	I1002 19:03:18.456370  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.456391  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) DBG | domain kubernetes-upgrade-440505 has defined IP address 192.168.83.201 and MAC address 52:54:00:58:26:e9 in network mk-kubernetes-upgrade-440505
	I1002 19:03:18.456494  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:18.456526  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHPort
	I1002 19:03:18.456699  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:18.456701  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHKeyPath
	I1002 19:03:18.456851  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:18.457034  157637 main.go:141] libmachine: (kubernetes-upgrade-440505) Calling .GetSSHUsername
	I1002 19:03:18.457041  157637 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/kubernetes-upgrade-440505/id_rsa Username:docker}
	I1002 19:03:18.457191  157637 sshutil.go:53] new ssh client: &{IP:192.168.83.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/kubernetes-upgrade-440505/id_rsa Username:docker}
	I1002 19:03:18.574696  157637 ssh_runner.go:195] Run: systemctl --version
	I1002 19:03:18.581151  157637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:03:18.587164  157637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:03:18.587246  157637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1002 19:03:18.597765  157637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1002 19:03:18.616670  157637 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 19:03:18.616713  157637 start.go:469] detecting cgroup driver to use...
	I1002 19:03:18.616870  157637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:03:18.646014  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 19:03:18.667229  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:03:18.681121  157637 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:03:18.681194  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:03:18.691522  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:03:18.701299  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:03:18.710754  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:03:18.723358  157637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:03:18.734616  157637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:03:18.745463  157637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:03:18.758397  157637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:03:18.767733  157637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:03:18.892200  157637 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:03:18.912232  157637 start.go:469] detecting cgroup driver to use...
	I1002 19:03:18.912336  157637 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:03:18.930241  157637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:03:18.944918  157637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:03:18.969526  157637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:03:18.985884  157637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:03:19.002231  157637 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 19:03:19.037527  157637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:03:19.050870  157637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:03:19.068817  157637 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:03:19.073284  157637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:03:19.084378  157637 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 19:03:19.102413  157637 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:03:19.216487  157637 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:03:19.348231  157637 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:03:19.348461  157637 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:03:19.365810  157637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:03:19.490220  157637 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:03:21.224973  157637 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.734705061s)
	I1002 19:03:21.225051  157637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:03:21.360334  157637 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:03:21.492069  157637 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:03:21.635382  157637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:03:21.756248  157637 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:03:21.778102  157637 out.go:177] 
	W1002 19:03:21.779832  157637 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1002 19:03:21.779856  157637 out.go:239] * 
	W1002 19:03:21.781091  157637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1002 19:03:21.783011  157637 out.go:177] 

** /stderr **
version_upgrade_test.go:258: failed to upgrade with newest k8s version. args: out/minikube-linux-amd64 start -p kubernetes-upgrade-440505 --memory=2200 --kubernetes-version=v1.28.2 --alsologtostderr -v=1 --driver=kvm2  : exit status 90
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-440505 version --output=json
version_upgrade_test.go:261: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-440505 version --output=json: exit status 1 (62.613869ms)

** stderr ** 
	error: context "kubernetes-upgrade-440505" does not exist

** /stderr **
version_upgrade_test.go:263: error running kubectl: exit status 1
panic.go:523: *** TestKubernetesUpgrade FAILED at 2023-10-02 19:03:21.863524217 +0000 UTC m=+2391.754708718
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-440505 -n kubernetes-upgrade-440505
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-440505 -n kubernetes-upgrade-440505: exit status 6 (258.605693ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1002 19:03:22.106739  157867 status.go:415] kubeconfig endpoint: extract IP: "kubernetes-upgrade-440505" does not appear in /home/jenkins/minikube-integration/17339-126802/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "kubernetes-upgrade-440505" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "kubernetes-upgrade-440505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-440505
E1002 19:03:22.171827  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-440505: (1.152270004s)
--- FAIL: TestKubernetesUpgrade (99.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-695840 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-695840 "sudo crictl images -o json": exit status 1 (228.460265ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-695840 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
[]string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
}
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695840 -n old-k8s-version-695840
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-695840 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-695840 logs -n 25: (1.052876109s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-153772                                  | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:15 UTC | 02 Oct 23 19:15 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-695840             | old-k8s-version-695840       | jenkins | v1.31.2 | 02 Oct 23 19:15 UTC | 02 Oct 23 19:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-695840                              | old-k8s-version-695840       | jenkins | v1.31.2 | 02 Oct 23 19:15 UTC | 02 Oct 23 19:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-153772                 | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:15 UTC | 02 Oct 23 19:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-153772                                  | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:15 UTC | 02 Oct 23 19:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| ssh     | -p no-preload-680492 sudo                              | no-preload-680492            | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:20 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p no-preload-680492                                   | no-preload-680492            | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p no-preload-680492                                   | no-preload-680492            | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:20 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p no-preload-680492                                   | no-preload-680492            | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:20 UTC |
	| delete  | -p no-preload-680492                                   | no-preload-680492            | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:20 UTC |
	| delete  | -p                                                     | disable-driver-mounts-475939 | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:20 UTC |
	|         | disable-driver-mounts-475939                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075364 | jenkins | v1.31.2 | 02 Oct 23 19:20 UTC | 02 Oct 23 19:21 UTC |
	|         | default-k8s-diff-port-075364                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| start   | -p stopped-upgrade-817564                              | stopped-upgrade-817564       | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:22 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-075364  | default-k8s-diff-port-075364 | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-075364 | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	|         | default-k8s-diff-port-075364                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-153772 sudo                             | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p embed-certs-153772                                  | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p embed-certs-153772                                  | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p embed-certs-153772                                  | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	| addons  | enable dashboard -p default-k8s-diff-port-075364       | default-k8s-diff-port-075364 | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-075364 | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC |                     |
	|         | default-k8s-diff-port-075364                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-153772                                  | embed-certs-153772           | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC | 02 Oct 23 19:21 UTC |
	| start   | -p newest-cni-962509 --memory=2200 --alsologtostderr   | newest-cni-962509            | jenkins | v1.31.2 | 02 Oct 23 19:21 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --kubernetes-version=v1.28.2            |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-817564                              | stopped-upgrade-817564       | jenkins | v1.31.2 | 02 Oct 23 19:22 UTC | 02 Oct 23 19:22 UTC |
	| ssh     | -p old-k8s-version-695840 sudo                         | old-k8s-version-695840       | jenkins | v1.31.2 | 02 Oct 23 19:22 UTC |                     |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 19:21:57
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 19:21:57.443097  181123 out.go:296] Setting OutFile to fd 1 ...
	I1002 19:21:57.443332  181123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:21:57.443346  181123 out.go:309] Setting ErrFile to fd 2...
	I1002 19:21:57.443354  181123 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 19:21:57.443626  181123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	I1002 19:21:57.444365  181123 out.go:303] Setting JSON to false
	I1002 19:21:57.445395  181123 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":7464,"bootTime":1696267054,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 19:21:57.445471  181123 start.go:138] virtualization: kvm guest
	I1002 19:21:57.448079  181123 out.go:177] * [newest-cni-962509] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 19:21:57.449749  181123 out.go:177]   - MINIKUBE_LOCATION=17339
	I1002 19:21:57.449748  181123 notify.go:220] Checking for updates...
	I1002 19:21:57.451407  181123 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 19:21:57.453208  181123 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 19:21:57.454875  181123 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 19:21:57.456240  181123 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 19:21:57.457663  181123 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 19:21:57.459697  181123 config.go:182] Loaded profile config "default-k8s-diff-port-075364": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:21:57.459871  181123 config.go:182] Loaded profile config "old-k8s-version-695840": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1002 19:21:57.459989  181123 config.go:182] Loaded profile config "stopped-upgrade-817564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1002 19:21:57.460094  181123 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 19:21:57.496527  181123 out.go:177] * Using the kvm2 driver based on user configuration
	I1002 19:21:57.497954  181123 start.go:298] selected driver: kvm2
	I1002 19:21:57.497972  181123 start.go:902] validating driver "kvm2" against <nil>
	I1002 19:21:57.497989  181123 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 19:21:57.498743  181123 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:21:57.498841  181123 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17339-126802/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 19:21:57.516146  181123 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 19:21:57.516229  181123 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W1002 19:21:57.516269  181123 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1002 19:21:57.516588  181123 start_flags.go:942] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1002 19:21:57.516655  181123 cni.go:84] Creating CNI manager for ""
	I1002 19:21:57.516677  181123 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:21:57.516690  181123 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1002 19:21:57.516699  181123 start_flags.go:321] config:
	{Name:newest-cni-962509 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-962509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:do
cker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:21:57.516910  181123 iso.go:125] acquiring lock: {Name:mkf7650ebae79a7eed75eeedd5ceff434d4c4f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:21:57.519928  181123 out.go:177] * Starting control plane node newest-cni-962509 in cluster newest-cni-962509
	I1002 19:21:57.087327  181025 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:21:57.087374  181025 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 19:21:57.087385  181025 cache.go:57] Caching tarball of preloaded images
	I1002 19:21:57.087460  181025 preload.go:174] Found /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 19:21:57.087473  181025 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 19:21:57.087637  181025 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/config.json ...
	I1002 19:21:57.087919  181025 start.go:365] acquiring machines lock for default-k8s-diff-port-075364: {Name:mk379ca3a60ac28865ffc8c5642c99eadc78dc32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:21:57.521530  181123 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:21:57.521586  181123 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 19:21:57.521597  181123 cache.go:57] Caching tarball of preloaded images
	I1002 19:21:57.521720  181123 preload.go:174] Found /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1002 19:21:57.521736  181123 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 19:21:57.521870  181123 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/newest-cni-962509/config.json ...
	I1002 19:21:57.521897  181123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/newest-cni-962509/config.json: {Name:mkb406a2c72f9bdedc58f9df8dad7a6d6f1b89da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:21:57.522543  181123 start.go:365] acquiring machines lock for newest-cni-962509: {Name:mk379ca3a60ac28865ffc8c5642c99eadc78dc32 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1002 19:22:01.712849  177676 system_pods.go:86] 5 kube-system pods found
	I1002 19:22:01.712879  177676 system_pods.go:89] "coredns-5644d7b6d9-7vpzf" [cc95497a-3665-4e21-b63b-408a8f7f0766] Running
	I1002 19:22:01.712884  177676 system_pods.go:89] "coredns-5644d7b6d9-fds62" [d177e14e-9a63-4e17-8c35-c8a3ce2dcdfd] Running
	I1002 19:22:01.712888  177676 system_pods.go:89] "kube-proxy-hh4zl" [6c68f0ca-3cd1-4ec9-87f7-2a8e90ff96aa] Running
	I1002 19:22:01.712896  177676 system_pods.go:89] "metrics-server-74d5856cc6-fjpwr" [2b32ffb1-a767-4f21-bec7-21cdc20f6af6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 19:22:01.712900  177676 system_pods.go:89] "storage-provisioner" [636174e3-a913-4389-9e84-17569a9587bd] Running
	I1002 19:22:01.712917  177676 retry.go:31] will retry after 7.804675404s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 19:22:03.482789  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:03.483434  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | unable to find current IP address of domain stopped-upgrade-817564 in network minikube-net
	I1002 19:22:03.483461  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | I1002 19:22:03.483386  180470 retry.go:31] will retry after 6.682839702s: waiting for machine to come up
	I1002 19:22:09.523487  177676 system_pods.go:86] 6 kube-system pods found
	I1002 19:22:09.523519  177676 system_pods.go:89] "coredns-5644d7b6d9-7vpzf" [cc95497a-3665-4e21-b63b-408a8f7f0766] Running
	I1002 19:22:09.523527  177676 system_pods.go:89] "coredns-5644d7b6d9-fds62" [d177e14e-9a63-4e17-8c35-c8a3ce2dcdfd] Running
	I1002 19:22:09.523531  177676 system_pods.go:89] "etcd-old-k8s-version-695840" [3649693e-d086-4d87-abda-b0030509bf34] Running
	I1002 19:22:09.523535  177676 system_pods.go:89] "kube-proxy-hh4zl" [6c68f0ca-3cd1-4ec9-87f7-2a8e90ff96aa] Running
	I1002 19:22:09.523541  177676 system_pods.go:89] "metrics-server-74d5856cc6-fjpwr" [2b32ffb1-a767-4f21-bec7-21cdc20f6af6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 19:22:09.523547  177676 system_pods.go:89] "storage-provisioner" [636174e3-a913-4389-9e84-17569a9587bd] Running
	I1002 19:22:09.523565  177676 retry.go:31] will retry after 7.668625901s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 19:22:10.167611  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.168142  180432 main.go:141] libmachine: (stopped-upgrade-817564) Found IP for machine: 192.168.50.5
	I1002 19:22:10.168167  180432 main.go:141] libmachine: (stopped-upgrade-817564) Reserving static IP address...
	I1002 19:22:10.168179  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has current primary IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.168687  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "stopped-upgrade-817564", mac: "52:54:00:29:d1:77", ip: "192.168.50.5"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.168742  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-817564", mac: "52:54:00:29:d1:77", ip: "192.168.50.5"}
	I1002 19:22:10.168755  180432 main.go:141] libmachine: (stopped-upgrade-817564) Reserved static IP address: 192.168.50.5
	I1002 19:22:10.168774  180432 main.go:141] libmachine: (stopped-upgrade-817564) Waiting for SSH to be available...
	I1002 19:22:10.168805  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Getting to WaitForSSH function...
	I1002 19:22:10.171580  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.171978  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.172012  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.172144  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Using SSH client type: external
	I1002 19:22:10.172181  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Using SSH private key: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa (-rw-------)
	I1002 19:22:10.172221  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.5 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:22:10.172242  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | About to run SSH command:
	I1002 19:22:10.172255  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | exit 0
	I1002 19:22:10.300973  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | SSH cmd err, output: <nil>: 
	I1002 19:22:10.301303  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetConfigRaw
	I1002 19:22:10.301980  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetIP
	I1002 19:22:10.304881  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.305271  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.305307  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.305680  180432 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/config.json ...
	I1002 19:22:10.305936  180432 machine.go:88] provisioning docker machine ...
	I1002 19:22:10.305957  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:10.306197  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetMachineName
	I1002 19:22:10.306383  180432 buildroot.go:166] provisioning hostname "stopped-upgrade-817564"
	I1002 19:22:10.306400  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetMachineName
	I1002 19:22:10.306541  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:10.308869  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.309281  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.309305  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.309439  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:10.309724  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.309913  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.310048  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:10.310213  180432 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:10.310548  180432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I1002 19:22:10.310562  180432 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-817564 && echo "stopped-upgrade-817564" | sudo tee /etc/hostname
	I1002 19:22:10.431642  180432 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-817564
	
	I1002 19:22:10.431674  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:10.434670  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.435018  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.435053  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.435169  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:10.435378  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.435591  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.435781  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:10.435968  180432 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:10.436403  180432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I1002 19:22:10.436437  180432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-817564' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-817564/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-817564' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:22:10.558294  180432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:22:10.558346  180432 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17339-126802/.minikube CaCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17339-126802/.minikube}
	I1002 19:22:10.558377  180432 buildroot.go:174] setting up certificates
	I1002 19:22:10.558394  180432 provision.go:83] configureAuth start
	I1002 19:22:10.558409  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetMachineName
	I1002 19:22:10.558729  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetIP
	I1002 19:22:10.561659  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.562126  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.562169  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.562321  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:10.564912  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.565313  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.565352  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.565526  180432 provision.go:138] copyHostCerts
	I1002 19:22:10.565605  180432 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem, removing ...
	I1002 19:22:10.565620  180432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem
	I1002 19:22:10.565714  180432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem (1082 bytes)
	I1002 19:22:10.565905  180432 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem, removing ...
	I1002 19:22:10.565919  180432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem
	I1002 19:22:10.565961  180432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem (1123 bytes)
	I1002 19:22:10.566034  180432 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem, removing ...
	I1002 19:22:10.566045  180432 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem
	I1002 19:22:10.566077  180432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem (1679 bytes)
	I1002 19:22:10.566136  180432 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-817564 san=[192.168.50.5 192.168.50.5 localhost 127.0.0.1 minikube stopped-upgrade-817564]
	I1002 19:22:10.717010  180432 provision.go:172] copyRemoteCerts
	I1002 19:22:10.717075  180432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:22:10.717102  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:10.719533  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.719880  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.719924  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.720179  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:10.720399  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.720623  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:10.720774  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	I1002 19:22:10.803727  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 19:22:10.816398  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1002 19:22:10.829121  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:22:10.842865  180432 provision.go:86] duration metric: configureAuth took 284.454218ms
	I1002 19:22:10.842895  180432 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:22:10.843141  180432 config.go:182] Loaded profile config "stopped-upgrade-817564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1002 19:22:10.843175  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:10.843524  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:10.846425  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.846807  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.846847  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.846992  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:10.847211  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.847399  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.847568  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:10.847777  180432 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:10.848105  180432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I1002 19:22:10.848118  180432 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:22:10.962452  180432 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 19:22:10.962483  180432 buildroot.go:70] root file system type: tmpfs
	I1002 19:22:10.962622  180432 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:22:10.962654  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:10.965156  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.965577  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:10.965616  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:10.965799  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:10.965982  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.966145  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:10.966298  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:10.966446  180432 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:10.966754  180432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I1002 19:22:10.966832  180432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:22:11.091610  180432 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:22:11.091656  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:11.094510  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:11.095050  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:11.095090  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:11.095325  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:11.095540  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:11.095732  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:11.095894  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:11.096101  180432 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:11.096456  180432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I1002 19:22:11.096476  180432 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
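The SSH command above uses an install-only-if-changed idiom: `diff -u old new` exits 0 when the files are identical (so nothing happens), and non-zero when they differ (so the `|| { ... }` branch installs the new unit and restarts docker). A minimal, self-contained sketch of that pattern using throwaway temp files instead of the real unit paths:

```shell
# Sketch of the diff-||-install pattern from the log (temp files stand in
# for /lib/systemd/system/docker.service and docker.service.new).
old=$(mktemp); new=$(mktemp)

# Case 1: files identical -> diff exits 0 -> the || branch is skipped.
printf 'a\n' >"$old"; printf 'a\n' >"$new"
diff -u "$old" "$new" >/dev/null || mv "$new" "$old"
r1=$([ -e "$new" ] && echo kept || echo installed)

# Case 2: files differ -> diff exits non-zero -> new file is installed.
printf 'b\n' >"$new"
diff -u "$old" "$new" >/dev/null || mv "$new" "$old"
r2=$([ -e "$new" ] && echo kept || echo installed)

echo "$r1 $r2"
```

Note that in the log `diff` reports `can't stat '/lib/systemd/system/docker.service'`: a missing old file also makes `diff` exit non-zero, so the first-ever provision takes the install branch too.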
	I1002 19:22:12.306069  181025 start.go:369] acquired machines lock for "default-k8s-diff-port-075364" in 15.218105022s
	I1002 19:22:12.306123  181025 start.go:96] Skipping create...Using existing machine configuration
	I1002 19:22:12.306143  181025 fix.go:54] fixHost starting: 
	I1002 19:22:12.306589  181025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:12.306637  181025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:12.323228  181025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37381
	I1002 19:22:12.323714  181025 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:12.324330  181025 main.go:141] libmachine: Using API Version  1
	I1002 19:22:12.324362  181025 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:12.324761  181025 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:12.324951  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:12.325154  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetState
	I1002 19:22:12.326998  181025 fix.go:102] recreateIfNeeded on default-k8s-diff-port-075364: state=Stopped err=<nil>
	I1002 19:22:12.327035  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	W1002 19:22:12.327218  181025 fix.go:128] unexpected machine state, will restart: <nil>
	I1002 19:22:12.329281  181025 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-075364" ...
	I1002 19:22:12.071558  180432 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 19:22:12.071589  180432 machine.go:91] provisioned docker machine in 1.765636814s
	I1002 19:22:12.071601  180432 start.go:300] post-start starting for "stopped-upgrade-817564" (driver="kvm2")
	I1002 19:22:12.071613  180432 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:22:12.071629  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:12.072006  180432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:22:12.072048  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:12.074887  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.075261  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:12.075302  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.075384  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:12.075570  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:12.075761  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:12.075931  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	I1002 19:22:12.160656  180432 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:22:12.164999  180432 info.go:137] Remote host: Buildroot 2019.02.7
	I1002 19:22:12.165028  180432 filesync.go:126] Scanning /home/jenkins/minikube-integration/17339-126802/.minikube/addons for local assets ...
	I1002 19:22:12.165107  180432 filesync.go:126] Scanning /home/jenkins/minikube-integration/17339-126802/.minikube/files for local assets ...
	I1002 19:22:12.165196  180432 filesync.go:149] local asset: /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem -> 1340252.pem in /etc/ssl/certs
	I1002 19:22:12.165320  180432 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 19:22:12.171649  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem --> /etc/ssl/certs/1340252.pem (1708 bytes)
	I1002 19:22:12.185495  180432 start.go:303] post-start completed in 113.875275ms
	I1002 19:22:12.185519  180432 fix.go:56] fixHost completed within 39.615473184s
	I1002 19:22:12.185541  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:12.188857  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.189212  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:12.189251  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.189458  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:12.189675  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:12.189902  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:12.190068  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:12.190275  180432 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:12.190743  180432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.50.5 22 <nil> <nil>}
	I1002 19:22:12.190761  180432 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 19:22:12.305903  180432 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696274532.258889393
	
	I1002 19:22:12.305929  180432 fix.go:206] guest clock: 1696274532.258889393
	I1002 19:22:12.305937  180432 fix.go:219] Guest: 2023-10-02 19:22:12.258889393 +0000 UTC Remote: 2023-10-02 19:22:12.185523239 +0000 UTC m=+40.216215297 (delta=73.366154ms)
	I1002 19:22:12.305978  180432 fix.go:190] guest clock delta is within tolerance: 73.366154ms
	I1002 19:22:12.305983  180432 start.go:83] releasing machines lock for "stopped-upgrade-817564", held for 39.735958422s
	I1002 19:22:12.306012  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:12.306317  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetIP
	I1002 19:22:12.309206  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.309649  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:12.309711  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.309959  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:12.310540  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:12.310736  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:12.310837  180432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:22:12.310894  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:12.311051  180432 ssh_runner.go:195] Run: cat /version.json
	I1002 19:22:12.311084  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:12.313872  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.313963  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.314237  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:12.314303  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.314351  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:12.314375  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:12.314552  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:12.314629  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:12.314716  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:12.314795  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:12.314870  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:12.314933  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:12.315001  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	I1002 19:22:12.315069  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	W1002 19:22:12.424453  180432 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1002 19:22:12.424538  180432 ssh_runner.go:195] Run: systemctl --version
	I1002 19:22:12.429696  180432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:22:12.435141  180432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:22:12.435229  180432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I1002 19:22:12.440251  180432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I1002 19:22:12.444949  180432 cni.go:305] no active bridge cni configs found in "/etc/cni/net.d" - nothing to configure
	I1002 19:22:12.444983  180432 start.go:469] detecting cgroup driver to use...
	I1002 19:22:12.445120  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:22:12.455600  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.1"|' /etc/containerd/config.toml"
	I1002 19:22:12.461464  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:22:12.467129  180432 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:22:12.467203  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:22:12.473156  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:22:12.479185  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:22:12.484850  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:22:12.490604  180432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:22:12.497134  180432 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:22:12.502846  180432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:22:12.507946  180432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:22:12.513123  180432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:22:12.585915  180432 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:22:12.600277  180432 start.go:469] detecting cgroup driver to use...
	I1002 19:22:12.600378  180432 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:22:12.610628  180432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:22:12.618781  180432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:22:12.635508  180432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:22:12.647552  180432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:22:12.658962  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:22:12.671164  180432 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:22:12.674810  180432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:22:12.680818  180432 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 19:22:12.691360  180432 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:22:12.767776  180432 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:22:12.864788  180432 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:22:12.864961  180432 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:22:12.874834  180432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:22:12.946024  180432 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:22:14.499482  180432 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.553421421s)
	I1002 19:22:14.499567  180432 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:22:14.541600  180432 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:22:12.330976  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .Start
	I1002 19:22:12.331169  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Ensuring networks are active...
	I1002 19:22:12.331939  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Ensuring network default is active
	I1002 19:22:12.332233  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Ensuring network mk-default-k8s-diff-port-075364 is active
	I1002 19:22:12.332577  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Getting domain xml...
	I1002 19:22:12.333262  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Creating domain...
	I1002 19:22:13.643124  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Waiting to get IP...
	I1002 19:22:13.643954  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:13.644493  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:13.644559  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:13.644466  181192 retry.go:31] will retry after 199.51256ms: waiting for machine to come up
	I1002 19:22:13.846295  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:13.846981  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:13.847013  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:13.846938  181192 retry.go:31] will retry after 346.331197ms: waiting for machine to come up
	I1002 19:22:14.194726  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:14.195248  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:14.195285  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:14.195182  181192 retry.go:31] will retry after 389.792564ms: waiting for machine to come up
	I1002 19:22:14.586921  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:14.587536  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:14.587577  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:14.587503  181192 retry.go:31] will retry after 549.387182ms: waiting for machine to come up
	I1002 19:22:15.138239  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:15.138783  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:15.138814  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:15.138728  181192 retry.go:31] will retry after 752.178244ms: waiting for machine to come up
	I1002 19:22:15.892185  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:15.892881  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:15.892916  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:15.892813  181192 retry.go:31] will retry after 741.705472ms: waiting for machine to come up
	I1002 19:22:16.635644  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:16.636152  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:16.636189  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:16.636082  181192 retry.go:31] will retry after 1.058959151s: waiting for machine to come up
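The `retry.go:31] will retry after 199.51256ms / 346ms / 389ms / …` lines above show a probe retried with a growing (jittered) delay until the VM reports an IP. A hypothetical shell sketch of that retry shape, with a stub `probe` standing in for "did the domain get an IP yet":

```shell
# Sketch of a retry-with-growing-delay loop, as seen in the
# "will retry after ..." log lines. `probe` is a stand-in that
# succeeds on the third check.
attempt=0
delay=0.05
probe() { [ "$attempt" -ge 3 ]; }   # hypothetical: replace with a real IP lookup
until probe; do
  attempt=$((attempt + 1))
  sleep "$delay"
  # grow the wait between attempts (minikube also adds jitter)
  delay=$(awk -v d="$delay" 'BEGIN { print d * 1.5 }')
done
echo "succeeded after $attempt retries"
```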
	I1002 19:22:14.581420  180432 out.go:204] * Preparing Kubernetes v1.17.0 on Docker 19.03.5 ...
	I1002 19:22:14.581468  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetIP
	I1002 19:22:14.584970  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:14.585558  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:14.585596  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:14.585861  180432 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1002 19:22:14.590379  180432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:22:14.599375  180432 localpath.go:92] copying /home/jenkins/minikube-integration/17339-126802/.minikube/client.crt -> /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.crt
	I1002 19:22:14.599557  180432 localpath.go:117] copying /home/jenkins/minikube-integration/17339-126802/.minikube/client.key -> /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.key
	I1002 19:22:14.599684  180432 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime docker
	I1002 19:22:14.599722  180432 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:22:14.635545  180432 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	k8s.gcr.io/kube-proxy:v1.17.0
	k8s.gcr.io/kube-apiserver:v1.17.0
	k8s.gcr.io/kube-controller-manager:v1.17.0
	k8s.gcr.io/kube-scheduler:v1.17.0
	kubernetesui/dashboard:v2.0.0-beta8
	k8s.gcr.io/coredns:1.6.5
	k8s.gcr.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	k8s.gcr.io/kube-addon-manager:v9.0.2
	k8s.gcr.io/pause:3.1
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I1002 19:22:14.635578  180432 docker.go:670] registry.k8s.io/kube-apiserver:v1.17.0 wasn't preloaded
	I1002 19:22:14.635588  180432 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.17.0 registry.k8s.io/kube-controller-manager:v1.17.0 registry.k8s.io/kube-scheduler:v1.17.0 registry.k8s.io/kube-proxy:v1.17.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.5 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1002 19:22:14.636876  180432 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1002 19:22:14.636989  180432 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1002 19:22:14.637134  180432 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 19:22:14.637162  180432 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1002 19:22:14.637223  180432 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:22:14.637310  180432 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1002 19:22:14.637335  180432 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1002 19:22:14.637685  180432 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1002 19:22:14.637737  180432 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1002 19:22:14.638064  180432 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1002 19:22:14.638084  180432 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:22:14.638110  180432 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1002 19:22:14.638151  180432 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1002 19:22:14.638296  180432 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 19:22:14.638314  180432 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1002 19:22:14.638755  180432 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1002 19:22:14.875579  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I1002 19:22:14.883064  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.17.0
	I1002 19:22:14.885148  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1002 19:22:14.921273  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 19:22:14.923558  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.17.0
	I1002 19:22:14.929053  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.17.0
	I1002 19:22:14.942180  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.5
	I1002 19:22:14.968091  180432 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I1002 19:22:14.968156  180432 docker.go:317] Removing image: registry.k8s.io/pause:3.1
	I1002 19:22:14.968213  180432 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.17.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.17.0" does not exist at hash "0cae8d5cc64c7d8fbdf73ee2be36c77fdabd9e0c7d30da0c12aedf402730bbb2" in container runtime
	I1002 19:22:14.968236  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.1
	I1002 19:22:14.968258  180432 docker.go:317] Removing image: registry.k8s.io/kube-apiserver:v1.17.0
	I1002 19:22:14.968302  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.17.0
	I1002 19:22:14.986805  180432 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1002 19:22:14.986865  180432 docker.go:317] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1002 19:22:14.986911  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.4.3-0
	I1002 19:22:15.049506  180432 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.17.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.17.0" does not exist at hash "5eb3b7486872441e0943f6e14e9dd5cc1c70bc3047efacbc43d1aa9b7d5b3056" in container runtime
	I1002 19:22:15.049570  180432 docker.go:317] Removing image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 19:22:15.049635  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.17.0
	I1002 19:22:15.058372  180432 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.17.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.17.0" does not exist at hash "78c190f736b115876724580513fdf37fa4c3984559dc9e90372b11c21b9cad28" in container runtime
	I1002 19:22:15.058432  180432 docker.go:317] Removing image: registry.k8s.io/kube-scheduler:v1.17.0
	I1002 19:22:15.058483  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.17.0
	I1002 19:22:15.114587  180432 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.17.0" needs transfer: "registry.k8s.io/kube-proxy:v1.17.0" does not exist at hash "7d54289267dc5a115f940e8b1ea5c20483a5da5ae5bb3ad80107409ed1400f19" in container runtime
	I1002 19:22:15.114651  180432 docker.go:317] Removing image: registry.k8s.io/kube-proxy:v1.17.0
	I1002 19:22:15.114706  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.17.0
	I1002 19:22:15.124989  180432 cache_images.go:116] "registry.k8s.io/coredns:1.6.5" needs transfer: "registry.k8s.io/coredns:1.6.5" does not exist at hash "70f311871ae12c14bd0e02028f249f933f925e4370744e4e35f706da773a8f61" in container runtime
	I1002 19:22:15.125047  180432 docker.go:317] Removing image: registry.k8s.io/coredns:1.6.5
	I1002 19:22:15.125089  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1002 19:22:15.125099  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1002 19:22:15.125137  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1002 19:22:15.125099  180432 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns:1.6.5
	I1002 19:22:15.125192  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I1002 19:22:15.125194  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.17.0
	I1002 19:22:15.125200  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.4.3-0
	I1002 19:22:15.176390  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1002 19:22:15.176517  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I1002 19:22:15.200753  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1002 19:22:15.200876  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.17.0
	I1002 19:22:15.206701  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1002 19:22:15.206812  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.17.0
	I1002 19:22:15.212604  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 --> /var/lib/minikube/images/kube-apiserver_v1.17.0 (50629632 bytes)
	I1002 19:22:15.212629  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1002 19:22:15.212705  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 --> /var/lib/minikube/images/etcd_3.4.3-0 (100950016 bytes)
	I1002 19:22:15.212773  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_1.6.5
	I1002 19:22:15.212799  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 --> /var/lib/minikube/images/kube-controller-manager_v1.17.0 (48791552 bytes)
	I1002 19:22:15.212717  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 --> /var/lib/minikube/images/pause_3.1 (318976 bytes)
	I1002 19:22:15.221606  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 --> /var/lib/minikube/images/kube-scheduler_v1.17.0 (33822208 bytes)
	I1002 19:22:15.223032  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 --> /var/lib/minikube/images/kube-proxy_v1.17.0 (48705536 bytes)
	I1002 19:22:15.226973  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 --> /var/lib/minikube/images/coredns_1.6.5 (13241856 bytes)
	I1002 19:22:15.264773  180432 docker.go:284] Loading image: /var/lib/minikube/images/pause_3.1
	I1002 19:22:15.264808  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.1 | docker load"
	I1002 19:22:15.502610  180432 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:22:15.505687  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I1002 19:22:15.619267  180432 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1002 19:22:15.619336  180432 docker.go:317] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:22:15.619397  180432 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:22:15.672906  180432 docker.go:284] Loading image: /var/lib/minikube/images/coredns_1.6.5
	I1002 19:22:15.672940  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_1.6.5 | docker load"
	I1002 19:22:15.780660  180432 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1002 19:22:15.780797  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1002 19:22:16.461993  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 from cache
	I1002 19:22:16.462024  180432 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1002 19:22:16.462047  180432 docker.go:284] Loading image: /var/lib/minikube/images/kube-scheduler_v1.17.0
	I1002 19:22:16.462072  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.17.0 | docker load"
	I1002 19:22:16.462084  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
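	The pattern in this stretch of the log is: stat the file on the VM, treat a non-zero exit as "missing", and only then scp the cached tarball across. A minimal sketch of that check (the path here is a stand-in for the log's /var/lib/minikube/images/... files, and a local write stands in for scp):

```shell
# Existence check as ssh_runner performs it: stat the target;
# exit status 1 means the cached image must be transferred first.
img=/tmp/demo_image.tar   # stand-in for /var/lib/minikube/images/...
rm -f "$img"

if ! stat -c "%s %y" "$img" >/dev/null 2>&1; then
    # missing: transfer it (a placeholder write stands in for scp)
    printf 'layer-data' > "$img"
fi

# the re-check now succeeds and prints "<size> <mtime>"
stat -c "%s %y" "$img"
```

	Skipping the copy when the stat succeeds is what keeps repeated starts from re-transferring ~100 MB images like etcd.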
	I1002 19:22:17.198859  177676 system_pods.go:86] 7 kube-system pods found
	I1002 19:22:17.198900  177676 system_pods.go:89] "coredns-5644d7b6d9-7vpzf" [cc95497a-3665-4e21-b63b-408a8f7f0766] Running
	I1002 19:22:17.198909  177676 system_pods.go:89] "coredns-5644d7b6d9-fds62" [d177e14e-9a63-4e17-8c35-c8a3ce2dcdfd] Running
	I1002 19:22:17.198917  177676 system_pods.go:89] "etcd-old-k8s-version-695840" [3649693e-d086-4d87-abda-b0030509bf34] Running
	I1002 19:22:17.198924  177676 system_pods.go:89] "kube-apiserver-old-k8s-version-695840" [c2a15895-6ede-4286-a937-d816060ccd0c] Pending
	I1002 19:22:17.198930  177676 system_pods.go:89] "kube-proxy-hh4zl" [6c68f0ca-3cd1-4ec9-87f7-2a8e90ff96aa] Running
	I1002 19:22:17.198940  177676 system_pods.go:89] "metrics-server-74d5856cc6-fjpwr" [2b32ffb1-a767-4f21-bec7-21cdc20f6af6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 19:22:17.198950  177676 system_pods.go:89] "storage-provisioner" [636174e3-a913-4389-9e84-17569a9587bd] Running
	I1002 19:22:17.198970  177676 retry.go:31] will retry after 13.676082914s: missing components: kube-apiserver, kube-controller-manager, kube-scheduler
	I1002 19:22:17.696358  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:17.697054  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:17.697093  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:17.696969  181192 retry.go:31] will retry after 1.169150736s: waiting for machine to come up
	I1002 19:22:18.868554  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:18.869126  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:18.869159  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:18.869064  181192 retry.go:31] will retry after 1.134673895s: waiting for machine to come up
	I1002 19:22:20.005078  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:20.005654  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:20.005687  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:20.005598  181192 retry.go:31] will retry after 1.459533031s: waiting for machine to come up
	I1002 19:22:21.467282  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:21.467920  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:21.467955  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:21.467841  181192 retry.go:31] will retry after 2.368104219s: waiting for machine to come up
	I1002 19:22:17.159973  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 from cache
	I1002 19:22:17.160023  180432 docker.go:284] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.17.0
	I1002 19:22:17.160040  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.17.0 | docker load"
	I1002 19:22:17.459776  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 from cache
	I1002 19:22:17.459829  180432 docker.go:284] Loading image: /var/lib/minikube/images/kube-apiserver_v1.17.0
	I1002 19:22:17.459848  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.17.0 | docker load"
	I1002 19:22:17.761429  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 from cache
	I1002 19:22:17.761474  180432 docker.go:284] Loading image: /var/lib/minikube/images/kube-proxy_v1.17.0
	I1002 19:22:17.761492  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.17.0 | docker load"
	I1002 19:22:18.050097  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 from cache
	I1002 19:22:18.050142  180432 docker.go:284] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1002 19:22:18.050159  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1002 19:22:18.660649  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1002 19:22:18.660703  180432 docker.go:284] Loading image: /var/lib/minikube/images/etcd_3.4.3-0
	I1002 19:22:18.660722  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.4.3-0 | docker load"
	I1002 19:22:19.466413  180432 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 from cache
	I1002 19:22:19.466469  180432 cache_images.go:123] Successfully loaded all cached images
	I1002 19:22:19.466477  180432 cache_images.go:92] LoadImages completed in 4.83087701s
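	The per-image sequence above (transfer, then `sudo /bin/bash -c "cat <tarball> | docker load"`) amounts to a loop over the cache directory. A replay of that loop, with a byte-count standing in for `docker load` so the sketch runs without a container runtime (directory and filenames are stand-ins):

```shell
# Replay of minikube's cache-load loop; on the VM the loader is
# `sudo /bin/bash -c "cat <tarball> | docker load"`.
images_dir=$(mktemp -d)          # stand-in for /var/lib/minikube/images
printf 'pause-bytes'  > "$images_dir/pause_3.1"
printf 'coredns-data' > "$images_dir/coredns_1.6.5"

load_image() { cat "$1" | wc -c; }   # stand-in for `docker load`

for tarball in "$images_dir"/*; do
    load_image "$tarball"            # prints the loaded byte count
done
```

	Streaming through `cat ... | docker load` rather than invoking `docker load -i` lets the sudo boundary sit on the read side, which matters because the cache dir on the VM is root-owned.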
	I1002 19:22:19.466545  180432 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 19:22:19.514921  180432 cni.go:84] Creating CNI manager for ""
	I1002 19:22:19.514946  180432 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 19:22:19.514967  180432 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 19:22:19.514989  180432 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.5 APIServerPort:8443 KubernetesVersion:v1.17.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:stopped-upgrade-817564 NodeName:stopped-upgrade-817564 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1002 19:22:19.515172  180432 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "stopped-upgrade-817564"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.17.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:22:19.515275  180432 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.17.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=stopped-upgrade-817564 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1002 19:22:19.515330  180432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.17.0
	I1002 19:22:19.522230  180432 binaries.go:47] Didn't find k8s binaries: didn't find preexisting kubectl
	Initiating transfer...
	I1002 19:22:19.522302  180432 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.17.0
	I1002 19:22:19.528577  180432 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubelet.sha256
	I1002 19:22:19.528630  180432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:22:19.528638  180432 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubectl.sha256
	I1002 19:22:19.528726  180432 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.17.0/bin/linux/amd64/kubeadm.sha256
	I1002 19:22:19.528732  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl
	I1002 19:22:19.528820  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubeadm
	I1002 19:22:19.542755  180432 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.17.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/binaries/v1.17.0/kubectl': No such file or directory
	I1002 19:22:19.542792  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/linux/amd64/v1.17.0/kubectl --> /var/lib/minikube/binaries/v1.17.0/kubectl (43495424 bytes)
	I1002 19:22:19.542835  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/linux/amd64/v1.17.0/kubeadm --> /var/lib/minikube/binaries/v1.17.0/kubeadm (39342080 bytes)
	I1002 19:22:19.542849  180432 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.17.0/kubelet
	I1002 19:22:19.550388  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/cache/linux/amd64/v1.17.0/kubelet --> /var/lib/minikube/binaries/v1.17.0/kubelet (111560216 bytes)
	I1002 19:22:20.234371  180432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:22:20.240427  180432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1002 19:22:20.250188  180432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:22:20.259584  180432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2118 bytes)
	I1002 19:22:20.269186  180432 ssh_runner.go:195] Run: grep 192.168.50.5	control-plane.minikube.internal$ /etc/hosts
	I1002 19:22:20.272334  180432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.5	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
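	The `/etc/hosts` edit just run is idempotent: strip any existing `control-plane.minikube.internal` line, append the current IP, and copy the rewritten file back. Replayed against a scratch file (the real command targets /etc/hosts and installs the result via `sudo cp`):

```shell
hosts=$(mktemp)                      # scratch stand-in for /etc/hosts
printf '127.0.0.1\tlocalhost\n192.168.50.4\tcontrol-plane.minikube.internal\n' > "$hosts"

ip=192.168.50.5

# drop any stale entry for the name, then append the current mapping
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"

cat "$hosts"
```

	Because the old entry is filtered out before the append, re-running the command after an IP change leaves exactly one mapping instead of accumulating duplicates.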
	I1002 19:22:20.280991  180432 certs.go:56] Setting up /home/jenkins/minikube-integration/17339-126802/.minikube/profiles for IP: 192.168.50.5
	I1002 19:22:20.281028  180432 certs.go:190] acquiring lock for shared ca certs: {Name:mk1bad5dcf25e4f2ff7c547e39403ca2e6e2656c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:20.281231  180432 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17339-126802/.minikube/ca.key
	I1002 19:22:20.281286  180432 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17339-126802/.minikube/proxy-client-ca.key
	I1002 19:22:20.281410  180432 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/client.key
	I1002 19:22:20.281443  180432 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.key.4e3314a8
	I1002 19:22:20.281462  180432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.crt.4e3314a8 with IP's: [192.168.50.5 10.96.0.1 127.0.0.1 10.0.0.1]
	I1002 19:22:20.386112  180432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.crt.4e3314a8 ...
	I1002 19:22:20.386149  180432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.crt.4e3314a8: {Name:mk89868947d3e0b8acb1bfbc14c4215325810921 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:20.386354  180432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.key.4e3314a8 ...
	I1002 19:22:20.386371  180432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.key.4e3314a8: {Name:mk061e1b4bd27f63ea97158906c78e691c22f2c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:20.386468  180432 certs.go:337] copying /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.crt.4e3314a8 -> /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.crt
	I1002 19:22:20.386626  180432 certs.go:341] copying /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.key.4e3314a8 -> /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.key
	I1002 19:22:20.386799  180432 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/proxy-client.key
	I1002 19:22:20.386947  180432 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/134025.pem (1338 bytes)
	W1002 19:22:20.386991  180432 certs.go:433] ignoring /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/134025_empty.pem, impossibly tiny 0 bytes
	I1002 19:22:20.387008  180432 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 19:22:20.387056  180432 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem (1082 bytes)
	I1002 19:22:20.387093  180432 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:22:20.387128  180432 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem (1679 bytes)
	I1002 19:22:20.387188  180432 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem (1708 bytes)
	I1002 19:22:20.387811  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 19:22:20.403995  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 19:22:20.419300  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:22:20.434293  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1002 19:22:20.448262  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:22:20.462981  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:22:20.477039  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:22:20.491688  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 19:22:20.505310  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem --> /usr/share/ca-certificates/1340252.pem (1708 bytes)
	I1002 19:22:20.519845  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:22:20.533682  180432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/certs/134025.pem --> /usr/share/ca-certificates/134025.pem (1338 bytes)
	I1002 19:22:20.547634  180432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (774 bytes)
	I1002 19:22:20.557468  180432 ssh_runner.go:195] Run: openssl version
	I1002 19:22:20.563149  180432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134025.pem && ln -fs /usr/share/ca-certificates/134025.pem /etc/ssl/certs/134025.pem"
	I1002 19:22:20.570744  180432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134025.pem
	I1002 19:22:20.575503  180432 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 18:29 /usr/share/ca-certificates/134025.pem
	I1002 19:22:20.575580  180432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134025.pem
	I1002 19:22:20.587264  180432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134025.pem /etc/ssl/certs/51391683.0"
	I1002 19:22:20.594479  180432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1340252.pem && ln -fs /usr/share/ca-certificates/1340252.pem /etc/ssl/certs/1340252.pem"
	I1002 19:22:20.601781  180432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1340252.pem
	I1002 19:22:20.606640  180432 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 18:29 /usr/share/ca-certificates/1340252.pem
	I1002 19:22:20.606727  180432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1340252.pem
	I1002 19:22:20.621244  180432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1340252.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 19:22:20.628878  180432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:22:20.636695  180432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:22:20.641344  180432 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 18:24 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:22:20.641433  180432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:22:20.652528  180432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 19:22:20.660045  180432 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 19:22:20.664878  180432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 19:22:20.676298  180432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 19:22:20.687402  180432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 19:22:20.698509  180432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 19:22:20.709989  180432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 19:22:20.721030  180432 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
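	The cert handling in this stretch uses two openssl idioms: `x509 -hash` to derive the subject-hash filename behind the `/etc/ssl/certs/b5213941.0`-style symlinks, and `-checkend 86400` to confirm each cert remains valid 24 hours out. A self-contained replay on a throwaway self-signed cert (assumes the `openssl` CLI; the subject name is illustrative):

```shell
dir=$(mktemp -d)
# throwaway 2-day self-signed cert so -checkend 86400 passes
openssl req -x509 -newkey rsa:2048 -nodes -days 2 \
    -subj "/CN=demo" -keyout "$dir/key.pem" -out "$dir/cert.pem" 2>/dev/null

# subject-hash symlink: the lookup scheme OpenSSL uses under /etc/ssl/certs
hash=$(openssl x509 -hash -noout -in "$dir/cert.pem")
ln -fs "$dir/cert.pem" "$dir/$hash.0"

# -checkend 86400 exits 0 iff the cert is still valid 24h from now
openssl x509 -noout -in "$dir/cert.pem" -checkend 86400
echo "checkend exit: $?"
```

	The `<hash>.0` name is what lets `openssl verify` (and TLS clients using the certs dir) find a CA by subject without scanning every file, which is why the log symlinks each trusted PEM under its hash.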
	I1002 19:22:20.732549  180432 kubeadm.go:404] StartCluster: {Name:stopped-upgrade-817564 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.5 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I1002 19:22:20.732690  180432 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:22:20.765588  180432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:22:20.772542  180432 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 19:22:20.772582  180432 kubeadm.go:636] restartCluster start
	I1002 19:22:20.772666  180432 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 19:22:20.779928  180432 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:22:20.780652  180432 kubeconfig.go:135] verify returned: extract IP: "stopped-upgrade-817564" does not appear in /home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 19:22:20.780969  180432 kubeconfig.go:146] "stopped-upgrade-817564" context is missing from /home/jenkins/minikube-integration/17339-126802/kubeconfig - will repair!
	I1002 19:22:20.781558  180432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17339-126802/kubeconfig: {Name:mkd33d0e053964abc5732337034b1577498b626d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:20.782607  180432 kapi.go:59] client config for stopped-upgrade-817564: &rest.Config{Host:"https://192.168.50.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.crt", KeyFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.key", CAFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:22:20.783617  180432 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 19:22:20.790069  180432 kubeadm.go:602] needs reconfigure: configs differ:
	
	** stderr ** 
	diff: can't stat '/var/tmp/minikube/kubeadm.yaml': No such file or directory
	
	** /stderr **
	I1002 19:22:20.790090  180432 kubeadm.go:1128] stopping kube-system containers ...
	I1002 19:22:20.790151  180432 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:22:20.829538  180432 docker.go:463] Stopping containers: [a955252beac4 092071b54e8d 4489cfe62e16 73e972fce186 89632edb7eef 05a4d46140f1 033bf7c09489 b55717a76eb6 2ffd8988c51b 423cdbabab46 9166c9d7c4f3 86965062e356 48286ea34046 e684e42c54c3 dab27d4da13a 8961904d3869 0a41242be6cb aa735913184d]
	I1002 19:22:20.829611  180432 ssh_runner.go:195] Run: docker stop a955252beac4 092071b54e8d 4489cfe62e16 73e972fce186 89632edb7eef 05a4d46140f1 033bf7c09489 b55717a76eb6 2ffd8988c51b 423cdbabab46 9166c9d7c4f3 86965062e356 48286ea34046 e684e42c54c3 dab27d4da13a 8961904d3869 0a41242be6cb aa735913184d
	I1002 19:22:20.865760  180432 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1002 19:22:20.876267  180432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1002 19:22:20.882647  180432 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1002 19:22:20.882795  180432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1002 19:22:20.888657  180432 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1002 19:22:20.888684  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:22:20.954999  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:22:21.801438  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:22:23.838534  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:23.839211  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:23.839246  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:23.839148  181192 retry.go:31] will retry after 2.878867462s: waiting for machine to come up
	I1002 19:22:26.720742  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:26.721383  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:26.721418  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:26.721308  181192 retry.go:31] will retry after 3.036895404s: waiting for machine to come up
	I1002 19:22:22.016106  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:22:22.122610  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:22:22.222655  180432 api_server.go:52] waiting for apiserver process to appear ...
	I1002 19:22:22.222753  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:22.234197  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:22.746376  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:23.246266  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:23.745388  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:24.245532  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:24.745797  180432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 19:22:24.756020  180432 api_server.go:72] duration metric: took 2.533363261s to wait for apiserver process to appear ...
	I1002 19:22:24.756053  180432 api_server.go:88] waiting for apiserver healthz status ...
	I1002 19:22:24.756075  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:24.756644  180432 api_server.go:269] stopped: https://192.168.50.5:8443/healthz: Get "https://192.168.50.5:8443/healthz": dial tcp 192.168.50.5:8443: connect: connection refused
	I1002 19:22:24.756685  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:24.757126  180432 api_server.go:269] stopped: https://192.168.50.5:8443/healthz: Get "https://192.168.50.5:8443/healthz": dial tcp 192.168.50.5:8443: connect: connection refused
	I1002 19:22:25.257868  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:30.885331  177676 system_pods.go:86] 9 kube-system pods found
	I1002 19:22:30.885385  177676 system_pods.go:89] "coredns-5644d7b6d9-7vpzf" [cc95497a-3665-4e21-b63b-408a8f7f0766] Running
	I1002 19:22:30.885395  177676 system_pods.go:89] "coredns-5644d7b6d9-fds62" [d177e14e-9a63-4e17-8c35-c8a3ce2dcdfd] Running
	I1002 19:22:30.885402  177676 system_pods.go:89] "etcd-old-k8s-version-695840" [3649693e-d086-4d87-abda-b0030509bf34] Running
	I1002 19:22:30.885408  177676 system_pods.go:89] "kube-apiserver-old-k8s-version-695840" [c2a15895-6ede-4286-a937-d816060ccd0c] Running
	I1002 19:22:30.885414  177676 system_pods.go:89] "kube-controller-manager-old-k8s-version-695840" [fe7a8428-122b-4bcf-a79e-3b04c246f8d3] Running
	I1002 19:22:30.885421  177676 system_pods.go:89] "kube-proxy-hh4zl" [6c68f0ca-3cd1-4ec9-87f7-2a8e90ff96aa] Running
	I1002 19:22:30.885428  177676 system_pods.go:89] "kube-scheduler-old-k8s-version-695840" [58ac9dd9-fa6e-43ef-863f-5e56f88e28d6] Running
	I1002 19:22:30.885443  177676 system_pods.go:89] "metrics-server-74d5856cc6-fjpwr" [2b32ffb1-a767-4f21-bec7-21cdc20f6af6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1002 19:22:30.885456  177676 system_pods.go:89] "storage-provisioner" [636174e3-a913-4389-9e84-17569a9587bd] Running
	I1002 19:22:30.885471  177676 system_pods.go:126] duration metric: took 56.743558925s to wait for k8s-apps to be running ...
	I1002 19:22:30.885484  177676 system_svc.go:44] waiting for kubelet service to be running ....
	I1002 19:22:30.885546  177676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:22:30.906450  177676 system_svc.go:56] duration metric: took 20.954831ms WaitForService to wait for kubelet.
	I1002 19:22:30.906475  177676 kubeadm.go:581] duration metric: took 1m4.531157366s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1002 19:22:30.906496  177676 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:22:30.910107  177676 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1002 19:22:30.910142  177676 node_conditions.go:123] node cpu capacity is 2
	I1002 19:22:30.910158  177676 node_conditions.go:105] duration metric: took 3.656809ms to run NodePressure ...
	I1002 19:22:30.910175  177676 start.go:228] waiting for startup goroutines ...
	I1002 19:22:30.910189  177676 start.go:233] waiting for cluster config update ...
	I1002 19:22:30.910205  177676 start.go:242] writing updated cluster config ...
	I1002 19:22:30.910579  177676 ssh_runner.go:195] Run: rm -f paused
	I1002 19:22:30.959457  177676 start.go:600] kubectl: 1.28.2, cluster: 1.16.0 (minor skew: 12)
	I1002 19:22:30.961548  177676 out.go:177] 
	W1002 19:22:30.963095  177676 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.16.0.
	I1002 19:22:30.964625  177676 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1002 19:22:30.966658  177676 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-695840" cluster and "default" namespace by default
	I1002 19:22:29.650322  180432 api_server.go:279] https://192.168.50.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 19:22:29.650362  180432 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 19:22:29.650377  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:29.676320  180432 api_server.go:279] https://192.168.50.5:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1002 19:22:29.676360  180432 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1002 19:22:29.757607  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:29.782338  180432 api_server.go:279] https://192.168.50.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 19:22:29.782380  180432 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 19:22:30.257985  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:30.264748  180432 api_server.go:279] https://192.168.50.5:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W1002 19:22:30.264791  180432 api_server.go:103] status: https://192.168.50.5:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I1002 19:22:30.757399  180432 api_server.go:253] Checking apiserver healthz at https://192.168.50.5:8443/healthz ...
	I1002 19:22:30.764198  180432 api_server.go:279] https://192.168.50.5:8443/healthz returned 200:
	ok
	I1002 19:22:30.771062  180432 api_server.go:141] control plane version: v1.17.0
	I1002 19:22:30.771098  180432 api_server.go:131] duration metric: took 6.015035209s to wait for apiserver health ...
	I1002 19:22:30.771110  180432 cni.go:84] Creating CNI manager for ""
	I1002 19:22:30.771125  180432 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 19:22:30.771135  180432 system_pods.go:43] waiting for kube-system pods to appear ...
	I1002 19:22:30.783326  180432 system_pods.go:59] 9 kube-system pods found
	I1002 19:22:30.783365  180432 system_pods.go:61] "coredns-6955765f44-4jhdc" [1a09791a-363c-4b3f-9930-8ee2420902f1] Running
	I1002 19:22:30.783374  180432 system_pods.go:61] "coredns-6955765f44-6rjnr" [ded4a1cc-828b-415f-8900-94a2d5a3dff0] Running
	I1002 19:22:30.783382  180432 system_pods.go:61] "etcd-minikube" [66e36c82-0020-4d37-bb73-881150d040f2] Running
	I1002 19:22:30.783388  180432 system_pods.go:61] "kube-addon-manager-minikube" [5f885415-7132-477c-9814-23c87e08be30] Running
	I1002 19:22:30.783395  180432 system_pods.go:61] "kube-apiserver-minikube" [823eead2-27d3-4016-85d0-7dbe340d9c85] Running
	I1002 19:22:30.783401  180432 system_pods.go:61] "kube-controller-manager-minikube" [f6104d1f-71e3-446e-82bb-ca19e9caf609] Running
	I1002 19:22:30.783409  180432 system_pods.go:61] "kube-proxy-wms5v" [2895a48f-10fe-4f33-a3b1-21e32c7d03e8] Running
	I1002 19:22:30.783415  180432 system_pods.go:61] "kube-scheduler-minikube" [373b4e5c-7103-4c49-96f5-ec8c5e5dec48] Running
	I1002 19:22:30.783423  180432 system_pods.go:61] "storage-provisioner" [e62ad9be-56f8-4288-93d3-ef40430338d0] Running
	I1002 19:22:30.783429  180432 system_pods.go:74] duration metric: took 12.288249ms to wait for pod list to return data ...
	I1002 19:22:30.783443  180432 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:22:30.787163  180432 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
	I1002 19:22:30.787190  180432 node_conditions.go:123] node cpu capacity is 2
	I1002 19:22:30.787201  180432 node_conditions.go:105] duration metric: took 3.752551ms to run NodePressure ...
	I1002 19:22:30.787219  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.17.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1002 19:22:31.069619  180432 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1002 19:22:31.084185  180432 ops.go:34] apiserver oom_adj: -16
	I1002 19:22:31.084211  180432 kubeadm.go:640] restartCluster took 10.311621006s
	I1002 19:22:31.084223  180432 kubeadm.go:406] StartCluster complete in 10.351688939s
	I1002 19:22:31.084245  180432 settings.go:142] acquiring lock: {Name:mk819c25bf042bfc45c91bd0ae1e28747c3c6eda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:31.084324  180432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 19:22:31.085682  180432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17339-126802/kubeconfig: {Name:mkd33d0e053964abc5732337034b1577498b626d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:31.085978  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1002 19:22:31.086012  180432 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1002 19:22:31.086088  180432 addons.go:69] Setting storage-provisioner=true in profile "stopped-upgrade-817564"
	I1002 19:22:31.086100  180432 addons.go:69] Setting default-storageclass=true in profile "stopped-upgrade-817564"
	I1002 19:22:31.086114  180432 addons.go:231] Setting addon storage-provisioner=true in "stopped-upgrade-817564"
	I1002 19:22:31.086128  180432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "stopped-upgrade-817564"
	I1002 19:22:31.086164  180432 host.go:66] Checking if "stopped-upgrade-817564" exists ...
	I1002 19:22:31.086195  180432 config.go:182] Loaded profile config "stopped-upgrade-817564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1002 19:22:31.086280  180432 cache.go:107] acquiring lock: {Name:mk5b8f0938a437c6845d503f7f1cebcd016adc6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 19:22:31.086351  180432 cache.go:115] /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1002 19:22:31.086364  180432 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 88.549µs
	I1002 19:22:31.086378  180432 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17339-126802/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1002 19:22:31.086386  180432 cache.go:87] Successfully saved all images to host disk.
	I1002 19:22:31.086514  180432 config.go:182] Loaded profile config "stopped-upgrade-817564": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.17.0
	I1002 19:22:31.086547  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.086558  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.086590  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.086872  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.086925  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.086874  180432 kapi.go:59] client config for stopped-upgrade-817564: &rest.Config{Host:"https://192.168.50.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.crt", KeyFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.key", CAFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:22:31.086995  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.106495  180432 kapi.go:248] "coredns" deployment in "kube-system" namespace and "stopped-upgrade-817564" context rescaled to 1 replicas
	I1002 19:22:31.106549  180432 start.go:223] Will wait 6m0s for node &{Name:minikube IP:192.168.50.5 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 19:22:31.108841  180432 out.go:177] * Verifying Kubernetes components...
	I1002 19:22:31.106977  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44715
	I1002 19:22:31.106977  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42577
	I1002 19:22:31.106996  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35075
	I1002 19:22:31.110536  180432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 19:22:31.111065  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.111106  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.111189  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.111705  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.111727  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.111838  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.111855  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.111872  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.111884  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.112256  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.112293  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.112328  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.112484  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetState
	I1002 19:22:31.112940  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.112979  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.113262  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetState
	I1002 19:22:31.115549  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.115597  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.115639  180432 kapi.go:59] client config for stopped-upgrade-817564: &rest.Config{Host:"https://192.168.50.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.crt", KeyFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/profiles/stopped-upgrade-817564/client.key", CAFile:"/home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1bf7540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1002 19:22:31.115877  180432 addons.go:231] Setting addon default-storageclass=true in "stopped-upgrade-817564"
	I1002 19:22:31.115904  180432 host.go:66] Checking if "stopped-upgrade-817564" exists ...
	I1002 19:22:31.116177  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.116215  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.133173  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38543
	I1002 19:22:31.133698  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.134238  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.134261  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.136245  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.136404  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46747
	I1002 19:22:31.136581  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetState
	I1002 19:22:31.136853  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.137784  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.138124  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.138098  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:31.140583  180432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1002 19:22:31.138875  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.142266  180432 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:22:31.142283  180432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1002 19:22:31.142306  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:31.142345  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:31.142513  180432 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:22:31.142541  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:31.146326  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:31.146922  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:31.146950  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:31.147423  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:31.147739  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:31.148252  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:31.148594  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	I1002 19:22:31.149035  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:31.149449  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:31.149514  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:31.149762  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:31.149926  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:31.150062  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:31.150177  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	I1002 19:22:31.156784  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I1002 19:22:31.157193  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.157766  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.157790  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.158133  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.158688  180432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:31.158734  180432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:31.176580  180432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44391
	I1002 19:22:31.177101  180432 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:31.177687  180432 main.go:141] libmachine: Using API Version  1
	I1002 19:22:31.177703  180432 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:31.178162  180432 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:31.178336  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetState
	I1002 19:22:31.180425  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .DriverName
	I1002 19:22:31.180910  180432 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1002 19:22:31.180925  180432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1002 19:22:31.180941  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHHostname
	I1002 19:22:31.184331  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:31.184786  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:29:d1:77", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-10-02 20:03:56 +0000 UTC Type:0 Mac:52:54:00:29:d1:77 Iaid: IPaddr:192.168.50.5 Prefix:24 Hostname:stopped-upgrade-817564 Clientid:01:52:54:00:29:d1:77}
	I1002 19:22:31.184820  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | domain stopped-upgrade-817564 has defined IP address 192.168.50.5 and MAC address 52:54:00:29:d1:77 in network minikube-net
	I1002 19:22:31.184987  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHPort
	I1002 19:22:31.185194  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHKeyPath
	I1002 19:22:31.185366  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .GetSSHUsername
	I1002 19:22:31.185563  180432 sshutil.go:53] new ssh client: &{IP:192.168.50.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/stopped-upgrade-817564/id_rsa Username:docker}
	I1002 19:22:31.277065  180432 kubeadm.go:518] skip waiting for components based on config.
	I1002 19:22:31.277086  180432 node_conditions.go:102] verifying NodePressure condition ...
	I1002 19:22:31.277212  180432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.17.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1002 19:22:31.280744  180432 node_conditions.go:122] node storage ephemeral capacity is 17784772Ki
	I1002 19:22:31.280768  180432 node_conditions.go:123] node cpu capacity is 2
	I1002 19:22:31.280779  180432 node_conditions.go:105] duration metric: took 3.687545ms to run NodePressure ...
	I1002 19:22:31.280793  180432 start.go:228] waiting for startup goroutines ...
	I1002 19:22:31.300382  180432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1002 19:22:31.333462  180432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1002 19:22:31.395412  180432 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.17.0
	registry.k8s.io/kube-proxy:v1.17.0
	k8s.gcr.io/kube-controller-manager:v1.17.0
	registry.k8s.io/kube-controller-manager:v1.17.0
	k8s.gcr.io/kube-scheduler:v1.17.0
	registry.k8s.io/kube-scheduler:v1.17.0
	k8s.gcr.io/kube-apiserver:v1.17.0
	registry.k8s.io/kube-apiserver:v1.17.0
	kubernetesui/dashboard:v2.0.0-beta8
	k8s.gcr.io/coredns:1.6.5
	registry.k8s.io/coredns:1.6.5
	k8s.gcr.io/etcd:3.4.3-0
	registry.k8s.io/etcd:3.4.3-0
	kubernetesui/metrics-scraper:v1.0.2
	k8s.gcr.io/kube-addon-manager:v9.0.2
	k8s.gcr.io/pause:3.1
	registry.k8s.io/pause:3.1
	gcr.io/k8s-minikube/storage-provisioner:v1.8.1
	
	-- /stdout --
	I1002 19:22:31.395448  180432 cache_images.go:84] Images are preloaded, skipping loading
	I1002 19:22:31.395458  180432 cache_images.go:262] succeeded pushing to: stopped-upgrade-817564
	I1002 19:22:31.395464  180432 cache_images.go:263] failed pushing to: 
	I1002 19:22:31.395493  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.395511  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.395839  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Closing plugin on server side
	I1002 19:22:31.395841  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.395865  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.395875  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.395884  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.396128  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Closing plugin on server side
	I1002 19:22:31.396152  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.396162  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.837556  180432 start.go:923] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1002 19:22:31.837616  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.837633  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.837707  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.837732  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.837947  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.837962  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.837971  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.837978  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.838108  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.838114  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Closing plugin on server side
	I1002 19:22:31.838122  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.838132  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.838141  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.838331  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.838338  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Closing plugin on server side
	I1002 19:22:31.838350  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.838336  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.838354  180432 main.go:141] libmachine: (stopped-upgrade-817564) DBG | Closing plugin on server side
	I1002 19:22:31.838362  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.845347  180432 main.go:141] libmachine: Making call to close driver server
	I1002 19:22:31.845400  180432 main.go:141] libmachine: (stopped-upgrade-817564) Calling .Close
	I1002 19:22:31.845694  180432 main.go:141] libmachine: Successfully made call to close driver server
	I1002 19:22:31.845709  180432 main.go:141] libmachine: Making call to close connection to plugin binary
	I1002 19:22:31.847397  180432 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1002 19:22:31.848621  180432 addons.go:502] enable addons completed in 762.622758ms: enabled=[storage-provisioner default-storageclass]
	I1002 19:22:31.848656  180432 start.go:233] waiting for cluster config update ...
	I1002 19:22:31.848673  180432 start.go:242] writing updated cluster config ...
	I1002 19:22:31.848919  180432 ssh_runner.go:195] Run: rm -f paused
	I1002 19:22:31.899105  180432 start.go:600] kubectl: 1.28.2, cluster: 1.17.0 (minor skew: 11)
	I1002 19:22:31.900990  180432 out.go:177] 
	W1002 19:22:31.902391  180432 out.go:239] ! /usr/local/bin/kubectl is version 1.28.2, which may have incompatibilities with Kubernetes 1.17.0.
	I1002 19:22:31.903650  180432 out.go:177]   - Want kubectl v1.17.0? Try 'minikube kubectl -- get pods -A'
	I1002 19:22:31.905064  180432 out.go:177] * Done! kubectl is now configured to use "stopped-upgrade-817564" cluster and "" namespace by default
	I1002 19:22:29.761229  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:29.761756  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | unable to find current IP address of domain default-k8s-diff-port-075364 in network mk-default-k8s-diff-port-075364
	I1002 19:22:29.761790  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | I1002 19:22:29.761694  181192 retry.go:31] will retry after 4.766697111s: waiting for machine to come up
	I1002 19:22:36.714331  181123 start.go:369] acquired machines lock for "newest-cni-962509" in 39.191743296s
	I1002 19:22:36.714409  181123 start.go:93] Provisioning new machine with config: &{Name:newest-cni-962509 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:newest-cni-962509 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1002 19:22:36.714543  181123 start.go:125] createHost starting for "" (driver="kvm2")
	I1002 19:22:34.530605  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.531280  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has current primary IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.531311  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Found IP for machine: 192.168.72.204
	I1002 19:22:34.531327  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Reserving static IP address...
	I1002 19:22:34.531986  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-075364", mac: "52:54:00:21:3a:95", ip: "192.168.72.204"} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.532020  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | skip adding static IP to network mk-default-k8s-diff-port-075364 - found existing host DHCP lease matching {name: "default-k8s-diff-port-075364", mac: "52:54:00:21:3a:95", ip: "192.168.72.204"}
	I1002 19:22:34.532039  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | Getting to WaitForSSH function...
	I1002 19:22:34.532063  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Reserved static IP address: 192.168.72.204
	I1002 19:22:34.532073  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Waiting for SSH to be available...
	I1002 19:22:34.534773  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.535136  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.535168  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.535316  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | Using SSH client type: external
	I1002 19:22:34.535336  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | Using SSH private key: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/default-k8s-diff-port-075364/id_rsa (-rw-------)
	I1002 19:22:34.535391  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.204 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17339-126802/.minikube/machines/default-k8s-diff-port-075364/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1002 19:22:34.535416  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | About to run SSH command:
	I1002 19:22:34.535434  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | exit 0
	I1002 19:22:34.641801  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | SSH cmd err, output: <nil>: 
	I1002 19:22:34.642224  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetConfigRaw
	I1002 19:22:34.642906  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetIP
	I1002 19:22:34.645907  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.646352  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.646397  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.646707  181025 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/config.json ...
	I1002 19:22:34.646926  181025 machine.go:88] provisioning docker machine ...
	I1002 19:22:34.646949  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:34.647181  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetMachineName
	I1002 19:22:34.647430  181025 buildroot.go:166] provisioning hostname "default-k8s-diff-port-075364"
	I1002 19:22:34.647456  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetMachineName
	I1002 19:22:34.647642  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:34.650099  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.650490  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.650532  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.650678  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:34.650879  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:34.651054  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:34.651217  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:34.651383  181025 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:34.651780  181025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I1002 19:22:34.651803  181025 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-075364 && echo "default-k8s-diff-port-075364" | sudo tee /etc/hostname
	I1002 19:22:34.797307  181025 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-075364
	
	I1002 19:22:34.797341  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:34.800408  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.800845  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.800870  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.801169  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:34.801405  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:34.801605  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:34.801718  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:34.801894  181025 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:34.802227  181025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I1002 19:22:34.802256  181025 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-075364' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-075364/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-075364' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1002 19:22:34.943285  181025 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1002 19:22:34.943322  181025 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17339-126802/.minikube CaCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17339-126802/.minikube}
	I1002 19:22:34.943353  181025 buildroot.go:174] setting up certificates
	I1002 19:22:34.943367  181025 provision.go:83] configureAuth start
	I1002 19:22:34.943381  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetMachineName
	I1002 19:22:34.943758  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetIP
	I1002 19:22:34.946653  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.947062  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.947095  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.947291  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:34.949674  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.950082  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:34.950118  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:34.950297  181025 provision.go:138] copyHostCerts
	I1002 19:22:34.950385  181025 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem, removing ...
	I1002 19:22:34.950401  181025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem
	I1002 19:22:34.950477  181025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/ca.pem (1082 bytes)
	I1002 19:22:34.950668  181025 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem, removing ...
	I1002 19:22:34.950681  181025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem
	I1002 19:22:34.950707  181025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/cert.pem (1123 bytes)
	I1002 19:22:34.950770  181025 exec_runner.go:144] found /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem, removing ...
	I1002 19:22:34.950777  181025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem
	I1002 19:22:34.950794  181025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17339-126802/.minikube/key.pem (1679 bytes)
	I1002 19:22:34.950860  181025 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-075364 san=[192.168.72.204 192.168.72.204 localhost 127.0.0.1 minikube default-k8s-diff-port-075364]
	I1002 19:22:35.123184  181025 provision.go:172] copyRemoteCerts
	I1002 19:22:35.123246  181025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1002 19:22:35.123272  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:35.126466  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.126763  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:35.126802  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.126987  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:35.127191  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.127346  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:35.127471  181025 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/default-k8s-diff-port-075364/id_rsa Username:docker}
	I1002 19:22:35.221649  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1002 19:22:35.243873  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1002 19:22:35.266633  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1002 19:22:35.289419  181025 provision.go:86] duration metric: configureAuth took 346.035149ms
	I1002 19:22:35.289456  181025 buildroot.go:189] setting minikube options for container-runtime
	I1002 19:22:35.289700  181025 config.go:182] Loaded profile config "default-k8s-diff-port-075364": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 19:22:35.289740  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:35.290056  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:35.292931  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.293293  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:35.293320  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.293469  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:35.293680  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.293834  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.293935  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:35.294074  181025 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:35.294467  181025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I1002 19:22:35.294488  181025 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1002 19:22:35.430736  181025 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1002 19:22:35.430769  181025 buildroot.go:70] root file system type: tmpfs
	I1002 19:22:35.430896  181025 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1002 19:22:35.430933  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:35.433846  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.434296  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:35.434337  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.434541  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:35.434784  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.435026  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.435147  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:35.435279  181025 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:35.435728  181025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I1002 19:22:35.435851  181025 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1002 19:22:35.577764  181025 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1002 19:22:35.577795  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:35.580880  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.581273  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:35.581311  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:35.581479  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:35.581737  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.581943  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:35.582089  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:35.582267  181025 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:35.582575  181025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I1002 19:22:35.582594  181025 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1002 19:22:36.442208  181025 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1002 19:22:36.442246  181025 machine.go:91] provisioned docker machine in 1.795304501s
	I1002 19:22:36.442257  181025 start.go:300] post-start starting for "default-k8s-diff-port-075364" (driver="kvm2")
	I1002 19:22:36.442270  181025 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1002 19:22:36.442289  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:36.442628  181025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1002 19:22:36.442656  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:36.445728  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.446051  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:36.446086  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.446231  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:36.446443  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:36.446643  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:36.446801  181025 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/default-k8s-diff-port-075364/id_rsa Username:docker}
	I1002 19:22:36.542676  181025 ssh_runner.go:195] Run: cat /etc/os-release
	I1002 19:22:36.546838  181025 info.go:137] Remote host: Buildroot 2021.02.12
	I1002 19:22:36.546868  181025 filesync.go:126] Scanning /home/jenkins/minikube-integration/17339-126802/.minikube/addons for local assets ...
	I1002 19:22:36.546939  181025 filesync.go:126] Scanning /home/jenkins/minikube-integration/17339-126802/.minikube/files for local assets ...
	I1002 19:22:36.547008  181025 filesync.go:149] local asset: /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem -> 1340252.pem in /etc/ssl/certs
	I1002 19:22:36.547113  181025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1002 19:22:36.555192  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem --> /etc/ssl/certs/1340252.pem (1708 bytes)
	I1002 19:22:36.578297  181025 start.go:303] post-start completed in 136.022163ms
	I1002 19:22:36.578333  181025 fix.go:56] fixHost completed within 24.272188808s
	I1002 19:22:36.578362  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:36.581002  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.581367  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:36.581414  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.581535  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:36.581806  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:36.581988  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:36.582177  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:36.582352  181025 main.go:141] libmachine: Using SSH client type: native
	I1002 19:22:36.582660  181025 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x7f56e0] 0x7f83c0 <nil>  [] 0s} 192.168.72.204 22 <nil> <nil>}
	I1002 19:22:36.582671  181025 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1002 19:22:36.714132  181025 main.go:141] libmachine: SSH cmd err, output: <nil>: 1696274556.657553406
	
	I1002 19:22:36.714161  181025 fix.go:206] guest clock: 1696274556.657553406
	I1002 19:22:36.714171  181025 fix.go:219] Guest: 2023-10-02 19:22:36.657553406 +0000 UTC Remote: 2023-10-02 19:22:36.578338945 +0000 UTC m=+39.644245481 (delta=79.214461ms)
	I1002 19:22:36.714223  181025 fix.go:190] guest clock delta is within tolerance: 79.214461ms
	I1002 19:22:36.714228  181025 start.go:83] releasing machines lock for "default-k8s-diff-port-075364", held for 24.408129672s
	I1002 19:22:36.714260  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:36.714533  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetIP
	I1002 19:22:36.717724  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.718061  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:36.718102  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.718215  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:36.718737  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:36.718949  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .DriverName
	I1002 19:22:36.719050  181025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1002 19:22:36.719106  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:36.719217  181025 ssh_runner.go:195] Run: cat /version.json
	I1002 19:22:36.719241  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHHostname
	I1002 19:22:36.722001  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.722104  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.722480  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:36.722535  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:36.722567  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.722618  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:36.722730  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:36.722879  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHPort
	I1002 19:22:36.722962  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:36.723060  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHKeyPath
	I1002 19:22:36.723132  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:36.723213  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetSSHUsername
	I1002 19:22:36.723293  181025 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/default-k8s-diff-port-075364/id_rsa Username:docker}
	I1002 19:22:36.723380  181025 sshutil.go:53] new ssh client: &{IP:192.168.72.204 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/default-k8s-diff-port-075364/id_rsa Username:docker}
	I1002 19:22:36.818380  181025 ssh_runner.go:195] Run: systemctl --version
	I1002 19:22:36.860594  181025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1002 19:22:36.866854  181025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1002 19:22:36.866938  181025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1002 19:22:36.883892  181025 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1002 19:22:36.883926  181025 start.go:469] detecting cgroup driver to use...
	I1002 19:22:36.884075  181025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:22:36.906907  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1002 19:22:36.920731  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1002 19:22:36.930805  181025 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1002 19:22:36.930922  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1002 19:22:36.941755  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:22:36.951393  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1002 19:22:36.964226  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1002 19:22:36.716647  181123 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1002 19:22:36.716822  181123 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 19:22:36.716870  181123 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 19:22:36.733511  181123 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34487
	I1002 19:22:36.734006  181123 main.go:141] libmachine: () Calling .GetVersion
	I1002 19:22:36.734580  181123 main.go:141] libmachine: Using API Version  1
	I1002 19:22:36.734603  181123 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 19:22:36.734968  181123 main.go:141] libmachine: () Calling .GetMachineName
	I1002 19:22:36.735204  181123 main.go:141] libmachine: (newest-cni-962509) Calling .GetMachineName
	I1002 19:22:36.735355  181123 main.go:141] libmachine: (newest-cni-962509) Calling .DriverName
	I1002 19:22:36.735501  181123 start.go:159] libmachine.API.Create for "newest-cni-962509" (driver="kvm2")
	I1002 19:22:36.735529  181123 client.go:168] LocalClient.Create starting
	I1002 19:22:36.735573  181123 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem
	I1002 19:22:36.735617  181123 main.go:141] libmachine: Decoding PEM data...
	I1002 19:22:36.735634  181123 main.go:141] libmachine: Parsing certificate...
	I1002 19:22:36.735690  181123 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem
	I1002 19:22:36.735708  181123 main.go:141] libmachine: Decoding PEM data...
	I1002 19:22:36.735719  181123 main.go:141] libmachine: Parsing certificate...
	I1002 19:22:36.735735  181123 main.go:141] libmachine: Running pre-create checks...
	I1002 19:22:36.735746  181123 main.go:141] libmachine: (newest-cni-962509) Calling .PreCreateCheck
	I1002 19:22:36.736130  181123 main.go:141] libmachine: (newest-cni-962509) Calling .GetConfigRaw
	I1002 19:22:36.736560  181123 main.go:141] libmachine: Creating machine...
	I1002 19:22:36.736580  181123 main.go:141] libmachine: (newest-cni-962509) Calling .Create
	I1002 19:22:36.736750  181123 main.go:141] libmachine: (newest-cni-962509) Creating KVM machine...
	I1002 19:22:36.738137  181123 main.go:141] libmachine: (newest-cni-962509) DBG | found existing default KVM network
	I1002 19:22:36.739550  181123 main.go:141] libmachine: (newest-cni-962509) DBG | I1002 19:22:36.739357  181532 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:7b:e9:f3} reservation:<nil>}
	I1002 19:22:36.740757  181123 main.go:141] libmachine: (newest-cni-962509) DBG | I1002 19:22:36.740654  181532 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000256800}
	I1002 19:22:36.746517  181123 main.go:141] libmachine: (newest-cni-962509) DBG | trying to create private KVM network mk-newest-cni-962509 192.168.50.0/24...
	I1002 19:22:36.828283  181123 main.go:141] libmachine: (newest-cni-962509) DBG | private KVM network mk-newest-cni-962509 192.168.50.0/24 created
	I1002 19:22:36.828322  181123 main.go:141] libmachine: (newest-cni-962509) Setting up store path in /home/jenkins/minikube-integration/17339-126802/.minikube/machines/newest-cni-962509 ...
	I1002 19:22:36.828341  181123 main.go:141] libmachine: (newest-cni-962509) DBG | I1002 19:22:36.828274  181532 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 19:22:36.828361  181123 main.go:141] libmachine: (newest-cni-962509) Building disk image from file:///home/jenkins/minikube-integration/17339-126802/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 19:22:36.828494  181123 main.go:141] libmachine: (newest-cni-962509) Downloading /home/jenkins/minikube-integration/17339-126802/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17339-126802/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso...
	I1002 19:22:37.087104  181123 main.go:141] libmachine: (newest-cni-962509) DBG | I1002 19:22:37.086946  181532 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17339-126802/.minikube/machines/newest-cni-962509/id_rsa...
	I1002 19:22:36.975393  181025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1002 19:22:36.986780  181025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1002 19:22:36.997291  181025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1002 19:22:37.007925  181025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1002 19:22:37.017904  181025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:22:37.125265  181025 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1002 19:22:37.143011  181025 start.go:469] detecting cgroup driver to use...
	I1002 19:22:37.143114  181025 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1002 19:22:37.159175  181025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:22:37.174374  181025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1002 19:22:37.197176  181025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1002 19:22:37.210952  181025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:22:37.224552  181025 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1002 19:22:37.253035  181025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1002 19:22:37.268461  181025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1002 19:22:37.287020  181025 ssh_runner.go:195] Run: which cri-dockerd
	I1002 19:22:37.291312  181025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1002 19:22:37.300220  181025 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1002 19:22:37.315309  181025 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1002 19:22:37.416499  181025 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1002 19:22:37.537203  181025 docker.go:554] configuring docker to use "cgroupfs" as cgroup driver...
	I1002 19:22:37.537362  181025 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1002 19:22:37.556084  181025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:22:37.656740  181025 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1002 19:22:39.161937  181025 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.505152893s)
	I1002 19:22:39.162020  181025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:22:39.273498  181025 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1002 19:22:39.392556  181025 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1002 19:22:39.509084  181025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:22:39.628408  181025 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1002 19:22:39.647924  181025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1002 19:22:39.759687  181025 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1002 19:22:39.853163  181025 start.go:516] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1002 19:22:39.853254  181025 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1002 19:22:39.859777  181025 start.go:537] Will wait 60s for crictl version
	I1002 19:22:39.859847  181025 ssh_runner.go:195] Run: which crictl
	I1002 19:22:39.864090  181025 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1002 19:22:39.926662  181025 start.go:553] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.6
	RuntimeApiVersion:  v1
	I1002 19:22:39.926731  181025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:22:39.953401  181025 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1002 19:22:39.982562  181025 out.go:204] * Preparing Kubernetes v1.28.2 on Docker 24.0.6 ...
	I1002 19:22:39.982650  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) Calling .GetIP
	I1002 19:22:39.985637  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:39.986029  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:21:3a:95", ip: ""} in network mk-default-k8s-diff-port-075364: {Iface:virbr4 ExpiryTime:2023-10-02 20:20:29 +0000 UTC Type:0 Mac:52:54:00:21:3a:95 Iaid: IPaddr:192.168.72.204 Prefix:24 Hostname:default-k8s-diff-port-075364 Clientid:01:52:54:00:21:3a:95}
	I1002 19:22:39.986058  181025 main.go:141] libmachine: (default-k8s-diff-port-075364) DBG | domain default-k8s-diff-port-075364 has defined IP address 192.168.72.204 and MAC address 52:54:00:21:3a:95 in network mk-default-k8s-diff-port-075364
	I1002 19:22:39.986272  181025 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1002 19:22:39.990740  181025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:22:40.004138  181025 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 19:22:40.004212  181025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:22:40.023796  181025 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1002 19:22:40.023831  181025 docker.go:594] Images already preloaded, skipping extraction
	I1002 19:22:40.023908  181025 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1002 19:22:40.050554  181025 docker.go:664] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/gvisor-addon:2
	registry.k8s.io/kube-apiserver:v1.28.2
	registry.k8s.io/kube-scheduler:v1.28.2
	registry.k8s.io/kube-controller-manager:v1.28.2
	registry.k8s.io/kube-proxy:v1.28.2
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I1002 19:22:40.050586  181025 cache_images.go:84] Images are preloaded, skipping loading
	I1002 19:22:40.050661  181025 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1002 19:22:40.082490  181025 cni.go:84] Creating CNI manager for ""
	I1002 19:22:40.082522  181025 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 19:22:40.082546  181025 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1002 19:22:40.082569  181025 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.204 APIServerPort:8444 KubernetesVersion:v1.28.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-075364 NodeName:default-k8s-diff-port-075364 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.204"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.204 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1002 19:22:40.082734  181025 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.204
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-075364"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.204
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.204"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1002 19:22:40.082910  181025 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-075364 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.204
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.2 ClusterName:default-k8s-diff-port-075364 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I1002 19:22:40.082986  181025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.2
	I1002 19:22:40.093029  181025 binaries.go:44] Found k8s binaries, skipping transfer
	I1002 19:22:40.093113  181025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1002 19:22:40.101986  181025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (391 bytes)
	I1002 19:22:40.119922  181025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1002 19:22:40.138205  181025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2117 bytes)
	I1002 19:22:40.157113  181025 ssh_runner.go:195] Run: grep 192.168.72.204	control-plane.minikube.internal$ /etc/hosts
	I1002 19:22:40.161207  181025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.204	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1002 19:22:40.175297  181025 certs.go:56] Setting up /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364 for IP: 192.168.72.204
	I1002 19:22:40.175332  181025 certs.go:190] acquiring lock for shared ca certs: {Name:mk1bad5dcf25e4f2ff7c547e39403ca2e6e2656c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:40.175514  181025 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17339-126802/.minikube/ca.key
	I1002 19:22:40.175577  181025 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17339-126802/.minikube/proxy-client-ca.key
	I1002 19:22:40.175689  181025 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/client.key
	I1002 19:22:40.175770  181025 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/apiserver.key.00048be2
	I1002 19:22:40.175828  181025 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/proxy-client.key
	I1002 19:22:40.175982  181025 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/134025.pem (1338 bytes)
	W1002 19:22:40.176023  181025 certs.go:433] ignoring /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/134025_empty.pem, impossibly tiny 0 bytes
	I1002 19:22:40.176040  181025 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca-key.pem (1679 bytes)
	I1002 19:22:40.176095  181025 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/ca.pem (1082 bytes)
	I1002 19:22:40.176153  181025 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/cert.pem (1123 bytes)
	I1002 19:22:40.176185  181025 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/certs/home/jenkins/minikube-integration/17339-126802/.minikube/certs/key.pem (1679 bytes)
	I1002 19:22:40.176241  181025 certs.go:437] found cert: /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem (1708 bytes)
	I1002 19:22:40.176980  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1002 19:22:40.201549  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1002 19:22:40.224571  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1002 19:22:40.249936  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/default-k8s-diff-port-075364/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1002 19:22:40.276045  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1002 19:22:40.298220  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1002 19:22:40.322793  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1002 19:22:40.350754  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1002 19:22:40.380227  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1002 19:22:40.406805  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/certs/134025.pem --> /usr/share/ca-certificates/134025.pem (1338 bytes)
	I1002 19:22:40.434260  181025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/ssl/certs/1340252.pem --> /usr/share/ca-certificates/1340252.pem (1708 bytes)
	I1002 19:22:40.459705  181025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1002 19:22:40.478909  181025 ssh_runner.go:195] Run: openssl version
	I1002 19:22:40.484887  181025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1002 19:22:40.495493  181025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:22:40.501798  181025 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct  2 18:24 /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:22:40.501883  181025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1002 19:22:40.509069  181025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1002 19:22:40.522581  181025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/134025.pem && ln -fs /usr/share/ca-certificates/134025.pem /etc/ssl/certs/134025.pem"
	I1002 19:22:40.533278  181025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/134025.pem
	I1002 19:22:40.538511  181025 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct  2 18:29 /usr/share/ca-certificates/134025.pem
	I1002 19:22:40.538591  181025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/134025.pem
	I1002 19:22:40.544666  181025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/134025.pem /etc/ssl/certs/51391683.0"
	I1002 19:22:40.554300  181025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1340252.pem && ln -fs /usr/share/ca-certificates/1340252.pem /etc/ssl/certs/1340252.pem"
	I1002 19:22:40.564406  181025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1340252.pem
	I1002 19:22:40.569273  181025 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct  2 18:29 /usr/share/ca-certificates/1340252.pem
	I1002 19:22:40.569339  181025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1340252.pem
	I1002 19:22:40.575142  181025 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1340252.pem /etc/ssl/certs/3ec20f2e.0"
	I1002 19:22:40.584500  181025 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1002 19:22:40.589083  181025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1002 19:22:40.594852  181025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1002 19:22:40.600833  181025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1002 19:22:40.607855  181025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1002 19:22:40.614946  181025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1002 19:22:40.622053  181025 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1002 19:22:40.629092  181025 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-075364 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.2 ClusterName:default-k8s-diff-port-075364 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.72.204 Port:8444 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts
:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 19:22:40.629247  181025 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1002 19:22:40.652065  181025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1002 19:22:40.663154  181025 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1002 19:22:40.663176  181025 kubeadm.go:636] restartCluster start
	I1002 19:22:40.663242  181025 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1002 19:22:40.672629  181025 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:22:40.673316  181025 kubeconfig.go:135] verify returned: extract IP: "default-k8s-diff-port-075364" does not appear in /home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 19:22:40.673595  181025 kubeconfig.go:146] "default-k8s-diff-port-075364" context is missing from /home/jenkins/minikube-integration/17339-126802/kubeconfig - will repair!
	I1002 19:22:40.674072  181025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17339-126802/kubeconfig: {Name:mkd33d0e053964abc5732337034b1577498b626d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1002 19:22:40.675671  181025 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1002 19:22:40.684319  181025 api_server.go:166] Checking apiserver status ...
	I1002 19:22:40.684390  181025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 19:22:40.695415  181025 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:22:40.695441  181025 api_server.go:166] Checking apiserver status ...
	I1002 19:22:40.695500  181025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 19:22:40.707763  181025 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:22:41.208462  181025 api_server.go:166] Checking apiserver status ...
	I1002 19:22:41.208566  181025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 19:22:41.220020  181025 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1002 19:22:41.708644  181025 api_server.go:166] Checking apiserver status ...
	I1002 19:22:41.708741  181025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1002 19:22:41.721042  181025 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	
	* 
	* ==> Docker <==
	* -- Journal begins at Mon 2023-10-02 19:15:25 UTC, ends at Mon 2023-10-02 19:22:42 UTC. --
	Oct 02 19:21:49 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:21:49.588370563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:21:49 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:21:49.588450975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:21:50 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:21:50.014550810Z" level=info msg="shim disconnected" id=a134209b560fbfb274794ad5d6752ce4503c27115d06b45a5ef82ff6353b2c64 namespace=moby
	Oct 02 19:21:50 old-k8s-version-695840 dockerd[1084]: time="2023-10-02T19:21:50.015179823Z" level=info msg="ignoring event" container=a134209b560fbfb274794ad5d6752ce4503c27115d06b45a5ef82ff6353b2c64 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 19:21:50 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:21:50.015480946Z" level=warning msg="cleaning up after shim disconnected" id=a134209b560fbfb274794ad5d6752ce4503c27115d06b45a5ef82ff6353b2c64 namespace=moby
	Oct 02 19:21:50 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:21:50.015499487Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.357898974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.358080379Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.358103500Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.358113483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1084]: time="2023-10-02T19:22:11.835779842Z" level=info msg="ignoring event" container=8e8852a0b5c1fd8b17a740f23711d9b170ed9216b7cce14011f1f831f4b16205 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.837002502Z" level=info msg="shim disconnected" id=8e8852a0b5c1fd8b17a740f23711d9b170ed9216b7cce14011f1f831f4b16205 namespace=moby
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.837133957Z" level=warning msg="cleaning up after shim disconnected" id=8e8852a0b5c1fd8b17a740f23711d9b170ed9216b7cce14011f1f831f4b16205 namespace=moby
	Oct 02 19:22:11 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:11.837150754Z" level=info msg="cleaning up dead shim" namespace=moby
	Oct 02 19:22:13 old-k8s-version-695840 dockerd[1084]: time="2023-10-02T19:22:13.275021256Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 19:22:13 old-k8s-version-695840 dockerd[1084]: time="2023-10-02T19:22:13.275073996Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 19:22:13 old-k8s-version-695840 dockerd[1084]: time="2023-10-02T19:22:13.278005901Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.354666200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.355468099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.355833339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.355854960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1084]: time="2023-10-02T19:22:32.763275741Z" level=info msg="ignoring event" container=152415d7061201a4b83c8a1daf8425e25b9dc4cf5483a910807daffe29efb1f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.765694326Z" level=info msg="shim disconnected" id=152415d7061201a4b83c8a1daf8425e25b9dc4cf5483a910807daffe29efb1f7 namespace=moby
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.765787940Z" level=warning msg="cleaning up after shim disconnected" id=152415d7061201a4b83c8a1daf8425e25b9dc4cf5483a910807daffe29efb1f7 namespace=moby
	Oct 02 19:22:32 old-k8s-version-695840 dockerd[1090]: time="2023-10-02T19:22:32.765800018Z" level=info msg="cleaning up dead shim" namespace=moby
	
	* 
	* ==> container status <==
	* time="2023-10-02T19:22:42Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                     PORTS     NAMES
	152415d70612   a90209bb39e3             "nginx -g 'daemon of…"   10 seconds ago       Exited (1) 9 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard_16ad1247-1ac3-4f55-83bc-86b0f2170688_3
	b669bf42eedf   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up 59 seconds                        k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-7j8g9_kubernetes-dashboard_21bbe5be-45d4-47e5-a886-f9b8c6e63ebf_0
	e99e2a901965   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kubernetes-dashboard-84b68f675b-7j8g9_kubernetes-dashboard_21bbe5be-45d4-47e5-a886-f9b8c6e63ebf_0
	2856aa6dbf15   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard_16ad1247-1ac3-4f55-83bc-86b0f2170688_0
	e817b4e5bf93   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_metrics-server-74d5856cc6-fjpwr_kube-system_2b32ffb1-a767-4f21-bec7-21cdc20f6af6_0
	f1c1ffcebd8a   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                    k8s_storage-provisioner_storage-provisioner_kube-system_636174e3-a913-4389-9e84-17569a9587bd_0
	c7a50e4956da   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_storage-provisioner_kube-system_636174e3-a913-4389-9e84-17569a9587bd_0
	f059ef7c3812   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                    k8s_coredns_coredns-5644d7b6d9-7vpzf_kube-system_cc95497a-3665-4e21-b63b-408a8f7f0766_0
	5e293557ef00   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                    k8s_coredns_coredns-5644d7b6d9-fds62_kube-system_d177e14e-9a63-4e17-8c35-c8a3ce2dcdfd_0
	30f813907d9c   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                    k8s_kube-proxy_kube-proxy-hh4zl_kube-system_6c68f0ca-3cd1-4ec9-87f7-2a8e90ff96aa_0
	86a1c47178cf   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_coredns-5644d7b6d9-7vpzf_kube-system_cc95497a-3665-4e21-b63b-408a8f7f0766_0
	58c71d93b143   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_coredns-5644d7b6d9-fds62_kube-system_d177e14e-9a63-4e17-8c35-c8a3ce2dcdfd_0
	dfb7bff6d5c0   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-proxy-hh4zl_kube-system_6c68f0ca-3cd1-4ec9-87f7-2a8e90ff96aa_0
	300ceb8e7947   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                    k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-695840_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	65c592bd727a   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                    k8s_kube-apiserver_kube-apiserver-old-k8s-version-695840_kube-system_432d1923390f3fa89e0cf0f6d1377e54_0
	8df8e5ac871c   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                    k8s_kube-scheduler_kube-scheduler-old-k8s-version-695840_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	fba4a5ea4b44   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                    k8s_etcd_etcd-old-k8s-version-695840_kube-system_6b165801bb8094e996d54001edd709ec_0
	d00206bfe70a   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-scheduler-old-k8s-version-695840_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	3dfca27b0d2b   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-controller-manager-old-k8s-version-695840_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	11917d105cfc   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_kube-apiserver-old-k8s-version-695840_kube-system_432d1923390f3fa89e0cf0f6d1377e54_0
	6de9423bca17   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                    k8s_POD_etcd-old-k8s-version-695840_kube-system_6b165801bb8094e996d54001edd709ec_0
	
	* 
	* ==> coredns [5e293557ef00] <==
	* .:53
	2023-10-02T19:21:29.273Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-02T19:21:29.273Z [INFO] CoreDNS-1.6.2
	2023-10-02T19:21:29.273Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-02T19:22:04.147Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	
	* 
	* ==> coredns [f059ef7c3812] <==
	* .:53
	2023-10-02T19:21:29.376Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-10-02T19:21:29.376Z [INFO] CoreDNS-1.6.2
	2023-10-02T19:21:29.376Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-10-02T19:21:52.257Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	[INFO] Reloading complete
	2023-10-02T19:21:52.272Z [INFO] 127.0.0.1:54797 - 2694 "HINFO IN 1090729787249199698.2656072811179968814. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013389433s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-695840
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-695840
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6c2ebb26865766fa20fdbd85c10f892656979721
	                    minikube.k8s.io/name=old-k8s-version-695840
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_10_02T19_21_11_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 02 Oct 2023 19:21:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 02 Oct 2023 19:22:06 +0000   Mon, 02 Oct 2023 19:21:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 02 Oct 2023 19:22:06 +0000   Mon, 02 Oct 2023 19:21:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 02 Oct 2023 19:22:06 +0000   Mon, 02 Oct 2023 19:21:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 02 Oct 2023 19:22:06 +0000   Mon, 02 Oct 2023 19:21:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.125
	  Hostname:    old-k8s-version-695840
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 c691029d1f54416383a3c044d3f85c1d
	 System UUID:                c691029d-1f54-4163-83a3-c044d3f85c1d
	 Boot ID:                    e926dbe5-186c-4f05-822e-faf41516b21c
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.6
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (11 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-7vpzf                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                coredns-5644d7b6d9-fds62                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     76s
	  kube-system                etcd-old-k8s-version-695840                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                kube-apiserver-old-k8s-version-695840             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                kube-controller-manager-old-k8s-version-695840    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18s
	  kube-system                kube-proxy-hh4zl                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                kube-scheduler-old-k8s-version-695840             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                metrics-server-74d5856cc6-fjpwr                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         72s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-ds2fk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-7j8g9             0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             340Mi (16%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  103s (x8 over 103s)  kubelet, old-k8s-version-695840     Node old-k8s-version-695840 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    103s (x8 over 103s)  kubelet, old-k8s-version-695840     Node old-k8s-version-695840 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     103s (x7 over 103s)  kubelet, old-k8s-version-695840     Node old-k8s-version-695840 status is now: NodeHasSufficientPID
	  Normal  Starting                 74s                  kube-proxy, old-k8s-version-695840  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.082854] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.476389] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.807772] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141121] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.418517] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.872211] systemd-fstab-generator[513]: Ignoring "noauto" for root device
	[  +0.111011] systemd-fstab-generator[524]: Ignoring "noauto" for root device
	[  +1.285339] systemd-fstab-generator[792]: Ignoring "noauto" for root device
	[  +0.378353] systemd-fstab-generator[831]: Ignoring "noauto" for root device
	[  +0.132242] systemd-fstab-generator[842]: Ignoring "noauto" for root device
	[  +0.186406] systemd-fstab-generator[855]: Ignoring "noauto" for root device
	[  +6.185486] systemd-fstab-generator[1075]: Ignoring "noauto" for root device
	[  +3.450422] kauditd_printk_skb: 67 callbacks suppressed
	[ +13.506132] systemd-fstab-generator[1492]: Ignoring "noauto" for root device
	[  +0.488986] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.193373] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct 2 19:16] kauditd_printk_skb: 7 callbacks suppressed
	[Oct 2 19:20] systemd-fstab-generator[5534]: Ignoring "noauto" for root device
	[Oct 2 19:21] hrtimer: interrupt took 12357223 ns
	[ +15.612916] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [fba4a5ea4b44] <==
	* 2023-10-02 19:21:01.422331 I | raft: raft.node: f4d3edba9e42b28c elected leader f4d3edba9e42b28c at term 2
	2023-10-02 19:21:01.430656 I | etcdserver: setting up the initial cluster version to 3.3
	2023-10-02 19:21:01.476761 I | etcdserver: published {Name:old-k8s-version-695840 ClientURLs:[https://192.168.39.125:2379]} to cluster 9838e9e2cfdaeabf
	2023-10-02 19:21:01.476804 I | embed: ready to serve client requests
	2023-10-02 19:21:01.622308 I | embed: serving client requests on 127.0.0.1:2379
	2023-10-02 19:21:01.769588 I | embed: ready to serve client requests
	2023-10-02 19:21:01.797728 I | embed: serving client requests on 192.168.39.125:2379
	2023-10-02 19:21:01.982883 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-10-02 19:21:01.988765 I | etcdserver/api: enabled capabilities for version 3.3
	2023-10-02 19:21:09.365838 W | etcdserver: read-only range request "key:\"/registry/roles/kube-system/system:controller:bootstrap-signer\" " with result "range_response_count:0 size:5" took too long (221.688817ms) to execute
	2023-10-02 19:21:09.367009 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-695840.178a60a2318fb02d\" " with result "range_response_count:0 size:5" took too long (203.153198ms) to execute
	2023-10-02 19:21:09.367701 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (196.003337ms) to execute
	2023-10-02 19:21:22.707787 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/service-account-controller\" " with result "range_response_count:1 size:220" took too long (109.912192ms) to execute
	2023-10-02 19:21:22.708342 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (106.510749ms) to execute
	2023-10-02 19:21:29.021003 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/metrics-server\" " with result "range_response_count:1 size:535" took too long (110.20494ms) to execute
	2023-10-02 19:21:29.138282 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:1 size:2737" took too long (246.904021ms) to execute
	2023-10-02 19:21:29.611941 W | etcdserver: read-only range request "key:\"/registry/namespaces/kubernetes-dashboard\" " with result "range_response_count:1 size:547" took too long (110.989207ms) to execute
	2023-10-02 19:21:29.612477 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (111.573455ms) to execute
	2023-10-02 19:21:30.062693 W | etcdserver: read-only range request "key:\"/registry/apiregistration.k8s.io/apiservices/v1beta1.metrics.k8s.io\" " with result "range_response_count:1 size:1425" took too long (159.981449ms) to execute
	2023-10-02 19:21:30.067085 W | etcdserver: read-only range request "key:\"/registry/services/specs/default/kubernetes\" " with result "range_response_count:1 size:293" took too long (163.805514ms) to execute
	2023-10-02 19:21:30.068237 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (164.972414ms) to execute
	2023-10-02 19:21:30.071232 W | etcdserver: read-only range request "key:\"/registry/roles/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (168.241158ms) to execute
	2023-10-02 19:21:30.123296 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:504" took too long (139.460299ms) to execute
	2023-10-02 19:21:30.142756 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kubernetes-dashboard/kubernetes-dashboard\" " with result "range_response_count:0 size:5" took too long (184.869297ms) to execute
	2023-10-02 19:22:17.146392 W | etcdserver: request "header:<ID:12865811006775948914 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk\" mod_revision:564 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk\" value_size:2099 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk\" > >>" with result "size:16" took too long (224.970153ms) to execute
	
	* 
	* ==> kernel <==
	*  19:22:42 up 7 min,  0 users,  load average: 1.33, 0.76, 0.32
	Linux old-k8s-version-695840 5.10.57 #1 SMP Mon Sep 18 23:12:38 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [65c592bd727a] <==
	* I1002 19:21:07.350937       1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
	I1002 19:21:07.365218       1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
	I1002 19:21:07.365345       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I1002 19:21:09.124945       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1002 19:21:09.603774       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1002 19:21:09.809812       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.39.125]
	I1002 19:21:09.811126       1 controller.go:606] quota admission added evaluator for: endpoints
	I1002 19:21:09.822583       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I1002 19:21:10.645214       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1002 19:21:11.220452       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1002 19:21:11.449867       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1002 19:21:25.856243       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1002 19:21:25.976342       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1002 19:21:26.058879       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	E1002 19:21:30.114339       1 available_controller.go:416] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	I1002 19:21:30.959331       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 19:21:30.959796       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 19:21:30.960262       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 19:21:30.960363       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1002 19:22:30.961031       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1002 19:22:30.961135       1 handler_proxy.go:99] no RequestInfo found in the context
	E1002 19:22:30.961225       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1002 19:22:30.961262       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [300ceb8e7947] <==
	* E1002 19:21:30.077192       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.078414       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"b53fdd58-fa4c-4b27-b6ef-9b76e7b31b20", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.111868       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"8351051e-402f-4254-af40-430f923cae90", APIVersion:"apps/v1", ResourceVersion:"372", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-fjpwr
	E1002 19:21:30.148064       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.148382       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"1dffd4ee-276d-4928-8503-440cbd903622", APIVersion:"apps/v1", ResourceVersion:"410", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.148443       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"b53fdd58-fa4c-4b27-b6ef-9b76e7b31b20", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.197492       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.198140       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.200895       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"b53fdd58-fa4c-4b27-b6ef-9b76e7b31b20", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.245027       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"b53fdd58-fa4c-4b27-b6ef-9b76e7b31b20", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.245066       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.263292       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.264123       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"1dffd4ee-276d-4928-8503-440cbd903622", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.274219       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"b53fdd58-fa4c-4b27-b6ef-9b76e7b31b20", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.274240       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.294391       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.295174       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"1dffd4ee-276d-4928-8503-440cbd903622", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1002 19:21:30.310011       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:30.310683       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"1dffd4ee-276d-4928-8503-440cbd903622", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1002 19:21:31.375252       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"b53fdd58-fa4c-4b27-b6ef-9b76e7b31b20", APIVersion:"apps/v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-ds2fk
	I1002 19:21:31.388170       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"1dffd4ee-276d-4928-8503-440cbd903622", APIVersion:"apps/v1", ResourceVersion:"421", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-7j8g9
	E1002 19:21:56.423045       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 19:21:58.353453       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1002 19:22:26.675269       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1002 19:22:30.355666       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [30f813907d9c] <==
	* W1002 19:21:28.505881       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1002 19:21:28.528995       1 node.go:135] Successfully retrieved node IP: 192.168.39.125
	I1002 19:21:28.529605       1 server_others.go:149] Using iptables Proxier.
	I1002 19:21:28.531126       1 server.go:529] Version: v1.16.0
	I1002 19:21:28.532016       1 config.go:313] Starting service config controller
	I1002 19:21:28.532143       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1002 19:21:28.532974       1 config.go:131] Starting endpoints config controller
	I1002 19:21:28.532992       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1002 19:21:28.633419       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1002 19:21:28.636909       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [8df8e5ac871c] <==
	* W1002 19:21:06.458071       1 authentication.go:79] Authentication is disabled
	I1002 19:21:06.458084       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1002 19:21:06.464208       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1002 19:21:06.511789       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 19:21:06.512120       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 19:21:06.512196       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 19:21:06.512491       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 19:21:06.513493       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 19:21:06.514008       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 19:21:06.514347       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 19:21:06.516303       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 19:21:06.516349       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 19:21:06.518600       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 19:21:06.519340       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1002 19:21:07.514809       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1002 19:21:07.518319       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1002 19:21:07.519467       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1002 19:21:07.520783       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1002 19:21:07.521948       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1002 19:21:07.523359       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1002 19:21:07.525270       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1002 19:21:07.528926       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1002 19:21:07.532076       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1002 19:21:07.532963       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1002 19:21:07.534390       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-10-02 19:15:25 UTC, ends at Mon 2023-10-02 19:22:42 UTC. --
	Oct 02 19:21:48 old-k8s-version-695840 kubelet[5540]: E1002 19:21:48.005916    5540 pod_workers.go:191] Error syncing pod 2b32ffb1-a767-4f21-bec7-21cdc20f6af6 ("metrics-server-74d5856cc6-fjpwr_kube-system(2b32ffb1-a767-4f21-bec7-21cdc20f6af6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 19:21:48 old-k8s-version-695840 kubelet[5540]: W1002 19:21:48.399280    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:21:48 old-k8s-version-695840 kubelet[5540]: W1002 19:21:48.555450    5540 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod16ad1247-1ac3-4f55-83bc-86b0f2170688/a998cdd6b0eaa865dfc27a6798fd707206343a1ec1371ab07e251f9c1b19d4d6": none of the resources are being tracked.
	Oct 02 19:21:49 old-k8s-version-695840 kubelet[5540]: W1002 19:21:49.480982    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:21:50 old-k8s-version-695840 kubelet[5540]: W1002 19:21:50.499096    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:21:50 old-k8s-version-695840 kubelet[5540]: E1002 19:21:50.512710    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:21:51 old-k8s-version-695840 kubelet[5540]: W1002 19:21:51.520687    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:21:51 old-k8s-version-695840 kubelet[5540]: E1002 19:21:51.531301    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:21:56 old-k8s-version-695840 kubelet[5540]: E1002 19:21:56.885315    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:22:00 old-k8s-version-695840 kubelet[5540]: E1002 19:22:00.244277    5540 pod_workers.go:191] Error syncing pod 2b32ffb1-a767-4f21-bec7-21cdc20f6af6 ("metrics-server-74d5856cc6-fjpwr_kube-system(2b32ffb1-a767-4f21-bec7-21cdc20f6af6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 19:22:11 old-k8s-version-695840 kubelet[5540]: W1002 19:22:11.710000    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:22:12 old-k8s-version-695840 kubelet[5540]: W1002 19:22:12.819614    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:22:12 old-k8s-version-695840 kubelet[5540]: E1002 19:22:12.828916    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:22:13 old-k8s-version-695840 kubelet[5540]: E1002 19:22:13.279033    5540 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 19:22:13 old-k8s-version-695840 kubelet[5540]: E1002 19:22:13.279166    5540 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 19:22:13 old-k8s-version-695840 kubelet[5540]: E1002 19:22:13.279282    5540 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Oct 02 19:22:13 old-k8s-version-695840 kubelet[5540]: E1002 19:22:13.279413    5540 pod_workers.go:191] Error syncing pod 2b32ffb1-a767-4f21-bec7-21cdc20f6af6 ("metrics-server-74d5856cc6-fjpwr_kube-system(2b32ffb1-a767-4f21-bec7-21cdc20f6af6)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Oct 02 19:22:13 old-k8s-version-695840 kubelet[5540]: W1002 19:22:13.839803    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:22:16 old-k8s-version-695840 kubelet[5540]: E1002 19:22:16.883503    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:22:26 old-k8s-version-695840 kubelet[5540]: E1002 19:22:26.245151    5540 pod_workers.go:191] Error syncing pod 2b32ffb1-a767-4f21-bec7-21cdc20f6af6 ("metrics-server-74d5856cc6-fjpwr_kube-system(2b32ffb1-a767-4f21-bec7-21cdc20f6af6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Oct 02 19:22:33 old-k8s-version-695840 kubelet[5540]: W1002 19:22:33.036233    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:22:33 old-k8s-version-695840 kubelet[5540]: E1002 19:22:33.046828    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:22:34 old-k8s-version-695840 kubelet[5540]: W1002 19:22:34.054947    5540 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-ds2fk through plugin: invalid network status for
	Oct 02 19:22:36 old-k8s-version-695840 kubelet[5540]: E1002 19:22:36.888769    5540 pod_workers.go:191] Error syncing pod 16ad1247-1ac3-4f55-83bc-86b0f2170688 ("dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-ds2fk_kubernetes-dashboard(16ad1247-1ac3-4f55-83bc-86b0f2170688)"
	Oct 02 19:22:40 old-k8s-version-695840 kubelet[5540]: E1002 19:22:40.244732    5540 pod_workers.go:191] Error syncing pod 2b32ffb1-a767-4f21-bec7-21cdc20f6af6 ("metrics-server-74d5856cc6-fjpwr_kube-system(2b32ffb1-a767-4f21-bec7-21cdc20f6af6)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> kubernetes-dashboard [b669bf42eedf] <==
	* 2023/10/02 19:21:42 Starting overwatch
	2023/10/02 19:21:42 Using namespace: kubernetes-dashboard
	2023/10/02 19:21:42 Using in-cluster config to connect to apiserver
	2023/10/02 19:21:42 Using secret token for csrf signing
	2023/10/02 19:21:42 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/10/02 19:21:42 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/10/02 19:21:42 Successful initial request to the apiserver, version: v1.16.0
	2023/10/02 19:21:42 Generating JWE encryption key
	2023/10/02 19:21:42 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/10/02 19:21:42 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/10/02 19:21:43 Initializing JWE encryption key from synchronized object
	2023/10/02 19:21:43 Creating in-cluster Sidecar client
	2023/10/02 19:21:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/10/02 19:21:43 Serving insecurely on HTTP port: 9090
	2023/10/02 19:22:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [f1c1ffcebd8a] <==
	* I1002 19:21:29.935362       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1002 19:21:29.980279       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1002 19:21:29.980947       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1002 19:21:30.221776       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1002 19:21:30.224463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-695840_28c54b3e-cb4e-4dd5-827a-f3f353d276b5!
	I1002 19:21:30.226304       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"350e4470-0aa5-4f67-a0a8-5527525d6269", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-695840_28c54b3e-cb4e-4dd5-827a-f3f353d276b5 became leader
	I1002 19:21:30.326476       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-695840_28c54b3e-cb4e-4dd5-827a-f3f353d276b5!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695840 -n old-k8s-version-695840
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-695840 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-fjpwr
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-695840 describe pod metrics-server-74d5856cc6-fjpwr
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-695840 describe pod metrics-server-74d5856cc6-fjpwr: exit status 1 (74.531473ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-fjpwr" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-695840 describe pod metrics-server-74d5856cc6-fjpwr: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.11s)

                                                
                                    

Test pass (280/313)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 23.61
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.28.2/json-events 15.44
11 TestDownloadOnly/v1.28.2/preload-exists 0
15 TestDownloadOnly/v1.28.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.13
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.12
19 TestBinaryMirror 0.57
20 TestOffline 71.93
22 TestAddons/Setup 150.07
24 TestAddons/parallel/Registry 16.91
25 TestAddons/parallel/Ingress 21.71
26 TestAddons/parallel/InspektorGadget 10.94
27 TestAddons/parallel/MetricsServer 5.8
28 TestAddons/parallel/HelmTiller 21.31
30 TestAddons/parallel/CSI 68.3
31 TestAddons/parallel/Headlamp 16.14
32 TestAddons/parallel/CloudSpanner 5.61
33 TestAddons/parallel/LocalPath 57.55
36 TestAddons/serial/GCPAuth/Namespaces 0.13
37 TestAddons/StoppedEnableDisable 13.36
38 TestCertOptions 66.27
39 TestCertExpiration 323.72
40 TestDockerFlags 108.43
41 TestForceSystemdFlag 84.93
42 TestForceSystemdEnv 83.84
44 TestKVMDriverInstallOrUpdate 3.65
48 TestErrorSpam/setup 51.23
49 TestErrorSpam/start 0.35
50 TestErrorSpam/status 0.79
51 TestErrorSpam/pause 1.21
52 TestErrorSpam/unpause 1.34
53 TestErrorSpam/stop 12.55
56 TestFunctional/serial/CopySyncFile 0
57 TestFunctional/serial/StartWithProxy 71.94
58 TestFunctional/serial/AuditLog 0
59 TestFunctional/serial/SoftStart 39.06
60 TestFunctional/serial/KubeContext 0.04
61 TestFunctional/serial/KubectlGetPods 0.09
64 TestFunctional/serial/CacheCmd/cache/add_remote 3.54
65 TestFunctional/serial/CacheCmd/cache/add_local 1.7
66 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
67 TestFunctional/serial/CacheCmd/cache/list 0.04
68 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
69 TestFunctional/serial/CacheCmd/cache/cache_reload 1.57
70 TestFunctional/serial/CacheCmd/cache/delete 0.09
71 TestFunctional/serial/MinikubeKubectlCmd 0.1
72 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
73 TestFunctional/serial/ExtraConfig 39.13
74 TestFunctional/serial/ComponentHealth 0.07
75 TestFunctional/serial/LogsCmd 1.13
76 TestFunctional/serial/LogsFileCmd 1.1
77 TestFunctional/serial/InvalidService 4.67
79 TestFunctional/parallel/ConfigCmd 0.33
80 TestFunctional/parallel/DashboardCmd 13.88
81 TestFunctional/parallel/DryRun 0.28
82 TestFunctional/parallel/InternationalLanguage 0.13
83 TestFunctional/parallel/StatusCmd 1.06
87 TestFunctional/parallel/ServiceCmdConnect 13.6
88 TestFunctional/parallel/AddonsCmd 0.12
89 TestFunctional/parallel/PersistentVolumeClaim 60.21
91 TestFunctional/parallel/SSHCmd 0.41
92 TestFunctional/parallel/CpCmd 0.89
93 TestFunctional/parallel/MySQL 37.32
94 TestFunctional/parallel/FileSync 0.2
95 TestFunctional/parallel/CertSync 1.35
99 TestFunctional/parallel/NodeLabels 0.06
101 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
103 TestFunctional/parallel/License 0.8
104 TestFunctional/parallel/ServiceCmd/DeployApp 13.27
114 TestFunctional/parallel/DockerEnv/bash 0.82
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.08
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
118 TestFunctional/parallel/Version/short 0.04
119 TestFunctional/parallel/Version/components 1.1
120 TestFunctional/parallel/ServiceCmd/List 0.43
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
123 TestFunctional/parallel/ServiceCmd/Format 0.37
124 TestFunctional/parallel/ServiceCmd/URL 0.39
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.38
126 TestFunctional/parallel/MountCmd/any-port 28.72
127 TestFunctional/parallel/ProfileCmd/profile_list 0.32
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
133 TestFunctional/parallel/ImageCommands/ImageBuild 4.38
134 TestFunctional/parallel/ImageCommands/Setup 2.38
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.15
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.47
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.92
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.75
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.19
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.23
142 TestFunctional/parallel/MountCmd/specific-port 1.84
143 TestFunctional/parallel/MountCmd/VerifyCleanup 1.33
144 TestFunctional/delete_addon-resizer_images 0.07
145 TestFunctional/delete_my-image_image 0.01
146 TestFunctional/delete_minikube_cached_images 0.02
147 TestGvisorAddon 354.11
152 TestIngressAddonLegacy/StartLegacyK8sCluster 95.18
154 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.96
155 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.57
156 TestIngressAddonLegacy/serial/ValidateIngressAddons 39.97
159 TestJSONOutput/start/Command 70.19
160 TestJSONOutput/start/Audit 0
162 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
163 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
165 TestJSONOutput/pause/Command 0.57
166 TestJSONOutput/pause/Audit 0
168 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
169 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
171 TestJSONOutput/unpause/Command 0.54
172 TestJSONOutput/unpause/Audit 0
174 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
175 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
177 TestJSONOutput/stop/Command 7.4
178 TestJSONOutput/stop/Audit 0
180 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
181 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
182 TestErrorJSONOutput 0.19
187 TestMainNoArgs 0.04
188 TestMinikubeProfile 109.29
191 TestMountStart/serial/StartWithMountFirst 31.89
192 TestMountStart/serial/VerifyMountFirst 0.56
193 TestMountStart/serial/StartWithMountSecond 30.72
194 TestMountStart/serial/VerifyMountSecond 0.37
195 TestMountStart/serial/DeleteFirst 0.87
196 TestMountStart/serial/VerifyMountPostDelete 0.38
197 TestMountStart/serial/Stop 2.08
198 TestMountStart/serial/RestartStopped 24.71
199 TestMountStart/serial/VerifyMountPostStop 0.39
202 TestMultiNode/serial/FreshStart2Nodes 129.63
203 TestMultiNode/serial/DeployApp2Nodes 6.03
204 TestMultiNode/serial/PingHostFrom2Pods 0.87
205 TestMultiNode/serial/AddNode 47.89
206 TestMultiNode/serial/ProfileList 0.21
207 TestMultiNode/serial/CopyFile 7.34
208 TestMultiNode/serial/StopNode 3.98
209 TestMultiNode/serial/StartAfterStop 31.12
210 TestMultiNode/serial/RestartKeepsNodes 180.04
211 TestMultiNode/serial/DeleteNode 1.76
212 TestMultiNode/serial/StopMultiNode 25.55
213 TestMultiNode/serial/RestartMultiNode 118.7
214 TestMultiNode/serial/ValidateNameConflict 53.47
219 TestPreload 178.35
221 TestScheduledStopUnix 122.33
222 TestSkaffold 141.83
225 TestRunningBinaryUpgrade 238.71
240 TestStoppedBinaryUpgrade/Setup 1.79
241 TestStoppedBinaryUpgrade/Upgrade 1146.86
243 TestPause/serial/Start 79.86
252 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
253 TestNoKubernetes/serial/StartWithK8s 63.74
254 TestPause/serial/SecondStartNoReconfiguration 52.13
255 TestNetworkPlugins/group/auto/Start 77.56
256 TestPause/serial/Pause 2.42
257 TestPause/serial/VerifyStatus 0.4
258 TestPause/serial/Unpause 0.78
259 TestNoKubernetes/serial/StartWithStopK8s 13.54
260 TestPause/serial/PauseAgain 1.07
261 TestPause/serial/DeletePaused 1.14
262 TestPause/serial/VerifyDeletedResources 0.52
263 TestNetworkPlugins/group/kindnet/Start 79.38
264 TestNoKubernetes/serial/Start 48.98
265 TestNetworkPlugins/group/auto/KubeletFlags 0.22
266 TestNetworkPlugins/group/auto/NetCatPod 16.39
267 TestNetworkPlugins/group/auto/DNS 0.21
268 TestNetworkPlugins/group/auto/Localhost 0.16
269 TestNetworkPlugins/group/auto/HairPin 0.17
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
271 TestNoKubernetes/serial/ProfileList 1.14
272 TestNoKubernetes/serial/Stop 2.16
273 TestNoKubernetes/serial/StartNoArgs 26.62
274 TestNetworkPlugins/group/calico/Start 124.85
275 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
276 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
277 TestNetworkPlugins/group/kindnet/NetCatPod 13.37
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
279 TestNetworkPlugins/group/custom-flannel/Start 104.91
280 TestNetworkPlugins/group/kindnet/DNS 0.21
281 TestNetworkPlugins/group/kindnet/Localhost 0.16
282 TestNetworkPlugins/group/kindnet/HairPin 0.15
283 TestNetworkPlugins/group/false/Start 97.92
284 TestNetworkPlugins/group/calico/ControllerPod 5.05
285 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
286 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.42
287 TestNetworkPlugins/group/calico/KubeletFlags 0.21
288 TestNetworkPlugins/group/calico/NetCatPod 12.42
289 TestNetworkPlugins/group/calico/DNS 0.33
290 TestNetworkPlugins/group/custom-flannel/DNS 0.22
291 TestNetworkPlugins/group/calico/Localhost 0.22
292 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
293 TestNetworkPlugins/group/custom-flannel/HairPin 0.22
294 TestNetworkPlugins/group/calico/HairPin 0.23
295 TestNetworkPlugins/group/false/KubeletFlags 0.22
296 TestNetworkPlugins/group/false/NetCatPod 12.36
297 TestNetworkPlugins/group/false/DNS 0.22
298 TestNetworkPlugins/group/false/Localhost 0.17
299 TestNetworkPlugins/group/false/HairPin 0.18
300 TestNetworkPlugins/group/enable-default-cni/Start 74.9
301 TestNetworkPlugins/group/flannel/Start 109.67
302 TestNetworkPlugins/group/bridge/Start 113.25
303 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
304 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.41
305 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
306 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
307 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
308 TestNetworkPlugins/group/kubenet/Start 73.21
309 TestNetworkPlugins/group/flannel/ControllerPod 5.02
310 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
311 TestNetworkPlugins/group/flannel/NetCatPod 12.43
312 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
313 TestNetworkPlugins/group/bridge/NetCatPod 13.4
314 TestNetworkPlugins/group/flannel/DNS 0.2
315 TestNetworkPlugins/group/flannel/Localhost 0.15
316 TestNetworkPlugins/group/flannel/HairPin 0.17
317 TestNetworkPlugins/group/bridge/DNS 0.4
318 TestNetworkPlugins/group/bridge/Localhost 0.18
319 TestNetworkPlugins/group/bridge/HairPin 0.21
321 TestStartStop/group/old-k8s-version/serial/FirstStart 136.5
323 TestStartStop/group/no-preload/serial/FirstStart 104.2
324 TestNetworkPlugins/group/kubenet/KubeletFlags 0.25
325 TestNetworkPlugins/group/kubenet/NetCatPod 15.48
326 TestNetworkPlugins/group/kubenet/DNS 0.2
327 TestNetworkPlugins/group/kubenet/Localhost 0.17
328 TestNetworkPlugins/group/kubenet/HairPin 0.14
330 TestStartStop/group/embed-certs/serial/FirstStart 78.73
331 TestStartStop/group/no-preload/serial/DeployApp 10.49
332 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
333 TestStartStop/group/no-preload/serial/Stop 13.13
334 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
335 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
336 TestStartStop/group/no-preload/serial/SecondStart 306.52
337 TestStartStop/group/embed-certs/serial/DeployApp 9.5
338 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
339 TestStartStop/group/old-k8s-version/serial/Stop 13.14
340 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
341 TestStartStop/group/embed-certs/serial/Stop 13.13
342 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
343 TestStartStop/group/old-k8s-version/serial/SecondStart 440.29
344 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
345 TestStartStop/group/embed-certs/serial/SecondStart 365.04
346 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
347 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
348 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
349 TestStartStop/group/no-preload/serial/Pause 2.65
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 76.68
352 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 24.03
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 11.53
354 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.26
355 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.14
356 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
357 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
358 TestStartStop/group/embed-certs/serial/Pause 2.67
359 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.73
360 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 349.8
362 TestStartStop/group/newest-cni/serial/FirstStart 107.81
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
364 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
365 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
367 TestStartStop/group/old-k8s-version/serial/Pause 2.65
368 TestStartStop/group/newest-cni/serial/DeployApp 0
369 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
370 TestStartStop/group/newest-cni/serial/Stop 13.11
371 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/newest-cni/serial/SecondStart 45.27
373 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
374 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
375 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
376 TestStartStop/group/newest-cni/serial/Pause 2.29
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 21.02
378 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
379 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
380 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.48
TestDownloadOnly/v1.16.0/json-events (23.61s)

=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-220567 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-220567 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (23.612676492s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (23.61s)

TestDownloadOnly/v1.16.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-220567
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-220567: exit status 85 (60.439514ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-220567 | jenkins | v1.31.2 | 02 Oct 23 18:23 UTC |          |
	|         | -p download-only-220567        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 18:23:30
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 18:23:30.177398  134038 out.go:296] Setting OutFile to fd 1 ...
	I1002 18:23:30.177667  134038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:23:30.177680  134038 out.go:309] Setting ErrFile to fd 2...
	I1002 18:23:30.177685  134038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:23:30.177857  134038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	W1002 18:23:30.177979  134038 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17339-126802/.minikube/config/config.json: open /home/jenkins/minikube-integration/17339-126802/.minikube/config/config.json: no such file or directory
	I1002 18:23:30.178535  134038 out.go:303] Setting JSON to true
	I1002 18:23:30.179357  134038 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3956,"bootTime":1696267054,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 18:23:30.179417  134038 start.go:138] virtualization: kvm guest
	I1002 18:23:30.182196  134038 out.go:97] [download-only-220567] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 18:23:30.183994  134038 out.go:169] MINIKUBE_LOCATION=17339
	W1002 18:23:30.182314  134038 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball: no such file or directory
	I1002 18:23:30.182401  134038 notify.go:220] Checking for updates...
	I1002 18:23:30.187491  134038 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 18:23:30.189354  134038 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 18:23:30.190963  134038 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 18:23:30.192469  134038 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 18:23:30.195391  134038 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 18:23:30.195610  134038 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 18:23:30.319696  134038 out.go:97] Using the kvm2 driver based on user configuration
	I1002 18:23:30.319739  134038 start.go:298] selected driver: kvm2
	I1002 18:23:30.319746  134038 start.go:902] validating driver "kvm2" against <nil>
	I1002 18:23:30.320044  134038 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 18:23:30.320183  134038 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17339-126802/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 18:23:30.336027  134038 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 18:23:30.336096  134038 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I1002 18:23:30.336761  134038 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1002 18:23:30.336948  134038 start_flags.go:905] Wait components to verify : map[apiserver:true system_pods:true]
	I1002 18:23:30.336992  134038 cni.go:84] Creating CNI manager for ""
	I1002 18:23:30.337021  134038 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1002 18:23:30.337032  134038 start_flags.go:321] config:
	{Name:download-only-220567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-220567 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunt
ime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 18:23:30.337309  134038 iso.go:125] acquiring lock: {Name:mkf7650ebae79a7eed75eeedd5ceff434d4c4f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 18:23:30.339765  134038 out.go:97] Downloading VM boot image ...
	I1002 18:23:30.339813  134038 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17339-126802/.minikube/cache/iso/amd64/minikube-v1.31.0-1695060926-17240-amd64.iso
	I1002 18:23:39.265703  134038 out.go:97] Starting control plane node download-only-220567 in cluster download-only-220567
	I1002 18:23:39.265738  134038 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 18:23:39.365796  134038 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1002 18:23:39.365848  134038 cache.go:57] Caching tarball of preloaded images
	I1002 18:23:39.366026  134038 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1002 18:23:39.368305  134038 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1002 18:23:39.368330  134038 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1002 18:23:39.474332  134038 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-220567"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)

TestDownloadOnly/v1.28.2/json-events (15.44s)

=== RUN   TestDownloadOnly/v1.28.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-220567 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-220567 --force --alsologtostderr --kubernetes-version=v1.28.2 --container-runtime=docker --driver=kvm2 : (15.442874056s)
--- PASS: TestDownloadOnly/v1.28.2/json-events (15.44s)

TestDownloadOnly/v1.28.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.2/preload-exists
--- PASS: TestDownloadOnly/v1.28.2/preload-exists (0.00s)

TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.2/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-220567
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-220567: exit status 85 (58.289615ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-220567 | jenkins | v1.31.2 | 02 Oct 23 18:23 UTC |          |
	|         | -p download-only-220567        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-220567 | jenkins | v1.31.2 | 02 Oct 23 18:23 UTC |          |
	|         | -p download-only-220567        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.2   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/10/02 18:23:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1002 18:23:53.852195  134118 out.go:296] Setting OutFile to fd 1 ...
	I1002 18:23:53.852473  134118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:23:53.852487  134118 out.go:309] Setting ErrFile to fd 2...
	I1002 18:23:53.852492  134118 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:23:53.852712  134118 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	W1002 18:23:53.852855  134118 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17339-126802/.minikube/config/config.json: open /home/jenkins/minikube-integration/17339-126802/.minikube/config/config.json: no such file or directory
	I1002 18:23:53.853320  134118 out.go:303] Setting JSON to true
	I1002 18:23:53.854188  134118 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3980,"bootTime":1696267054,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 18:23:53.854248  134118 start.go:138] virtualization: kvm guest
	I1002 18:23:53.856535  134118 out.go:97] [download-only-220567] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 18:23:53.858274  134118 out.go:169] MINIKUBE_LOCATION=17339
	I1002 18:23:53.856699  134118 notify.go:220] Checking for updates...
	I1002 18:23:53.861347  134118 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 18:23:53.863111  134118 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 18:23:53.864843  134118 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 18:23:53.866453  134118 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1002 18:23:53.869572  134118 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1002 18:23:53.871065  134118 config.go:182] Loaded profile config "download-only-220567": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1002 18:23:53.871458  134118 start.go:810] api.Load failed for download-only-220567: filestore "download-only-220567": Docker machine "download-only-220567" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 18:23:53.871578  134118 driver.go:373] Setting default libvirt URI to qemu:///system
	W1002 18:23:53.871619  134118 start.go:810] api.Load failed for download-only-220567: filestore "download-only-220567": Docker machine "download-only-220567" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1002 18:23:53.904741  134118 out.go:97] Using the kvm2 driver based on existing profile
	I1002 18:23:53.904775  134118 start.go:298] selected driver: kvm2
	I1002 18:23:53.904783  134118 start.go:902] validating driver "kvm2" against &{Name:download-only-220567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.16.0 ClusterName:download-only-220567 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 18:23:53.905350  134118 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 18:23:53.905490  134118 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17339-126802/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1002 18:23:53.925870  134118 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I1002 18:23:53.926544  134118 cni.go:84] Creating CNI manager for ""
	I1002 18:23:53.926567  134118 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1002 18:23:53.926581  134118 start_flags.go:321] config:
	{Name:download-only-220567 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:download-only-220567 Namespace:defa
ult APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Sock
etVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 18:23:53.926758  134118 iso.go:125] acquiring lock: {Name:mkf7650ebae79a7eed75eeedd5ceff434d4c4f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1002 18:23:53.928665  134118 out.go:97] Starting control plane node download-only-220567 in cluster download-only-220567
	I1002 18:23:53.928685  134118 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 18:23:54.641748  134118 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 18:23:54.641793  134118 cache.go:57] Caching tarball of preloaded images
	I1002 18:23:54.641969  134118 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 18:23:54.643989  134118 out.go:97] Downloading Kubernetes v1.28.2 preload ...
	I1002 18:23:54.644013  134118 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 18:23:55.127858  134118 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.2/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4?checksum=md5:30a5cb95ef165c1e9196502a3ab2be2b -> /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4
	I1002 18:24:07.315784  134118 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 18:24:07.315901  134118 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17339-126802/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.2-docker-overlay2-amd64.tar.lz4 ...
	I1002 18:24:08.277152  134118 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.2 on docker
	I1002 18:24:08.277321  134118 profile.go:148] Saving config to /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/download-only-220567/config.json ...
	I1002 18:24:08.277586  134118 preload.go:132] Checking if preload exists for k8s version v1.28.2 and runtime docker
	I1002 18:24:08.277832  134118 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17339-126802/.minikube/cache/linux/amd64/v1.28.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-220567"

-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.2/LogsDuration (0.06s)

TestDownloadOnly/DeleteAll (0.13s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.13s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-220567
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-437857 --alsologtostderr --binary-mirror http://127.0.0.1:37211 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-437857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-437857
--- PASS: TestBinaryMirror (0.57s)

TestOffline (71.93s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-440740 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-440740 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m10.867404794s)
helpers_test.go:175: Cleaning up "offline-docker-440740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-440740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-440740: (1.064471837s)
--- PASS: TestOffline (71.93s)

TestAddons/Setup (150.07s)

=== RUN   TestAddons/Setup
addons_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p addons-376551 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p addons-376551 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m30.068417573s)
--- PASS: TestAddons/Setup (150.07s)

TestAddons/parallel/Registry (16.91s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:308: registry stabilized in 23.768114ms
addons_test.go:310: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ghqhh" [0cec1c54-9410-4448-abdf-9b40f01201e9] Running
addons_test.go:310: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.023488964s
addons_test.go:313: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-szmn6" [f308a600-c381-421c-94a8-6211107d89ae] Running
addons_test.go:313: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.013822692s
addons_test.go:318: (dbg) Run:  kubectl --context addons-376551 delete po -l run=registry-test --now
addons_test.go:323: (dbg) Run:  kubectl --context addons-376551 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:323: (dbg) Done: kubectl --context addons-376551 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.135435572s)
addons_test.go:337: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 ip
2023/10/02 18:26:56 [DEBUG] GET http://192.168.39.72:5000
addons_test.go:366: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.91s)

TestAddons/parallel/Ingress (21.71s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) Run:  kubectl --context addons-376551 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:210: (dbg) Run:  kubectl --context addons-376551 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context addons-376551 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [1157a973-8214-400c-9657-d9b734235c72] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [1157a973-8214-400c-9657-d9b734235c72] Running
addons_test.go:228: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.015424476s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context addons-376551 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.72
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p addons-376551 addons disable ingress-dns --alsologtostderr -v=1: (1.016481306s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p addons-376551 addons disable ingress --alsologtostderr -v=1: (7.79304179s)
--- PASS: TestAddons/parallel/Ingress (21.71s)

TestAddons/parallel/InspektorGadget (10.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-znj95" [487aa5b9-bb42-43fd-be10-262d200b5cb0] Running
addons_test.go:816: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.011640057s
addons_test.go:819: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-376551
addons_test.go:819: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-376551: (5.927279123s)
--- PASS: TestAddons/parallel/InspektorGadget (10.94s)

TestAddons/parallel/MetricsServer (5.8s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:385: metrics-server stabilized in 4.461216ms
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-v9hbp" [91b0aab7-bc2e-463f-80bc-84c5442f5cab] Running
addons_test.go:387: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014473998s
addons_test.go:393: (dbg) Run:  kubectl --context addons-376551 top pods -n kube-system
addons_test.go:410: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.80s)

TestAddons/parallel/HelmTiller (21.31s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:434: tiller-deploy stabilized in 4.282129ms
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-s4sgr" [096eddf8-81c4-4112-afd5-503484efcf73] Running
addons_test.go:436: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.034814092s
addons_test.go:451: (dbg) Run:  kubectl --context addons-376551 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-376551 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (11.034965375s)
addons_test.go:456: kubectl --context addons-376551 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:451: (dbg) Run:  kubectl --context addons-376551 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:451: (dbg) Done: kubectl --context addons-376551 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.297821013s)
addons_test.go:468: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (21.31s)

TestAddons/parallel/CSI (68.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 32.101071ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-376551 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-376551 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [5b6cebe9-15b7-4305-a1b4-c2dcf17d5f73] Pending
helpers_test.go:344: "task-pv-pod" [5b6cebe9-15b7-4305-a1b4-c2dcf17d5f73] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [5b6cebe9-15b7-4305-a1b4-c2dcf17d5f73] Running
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.018635126s
addons_test.go:562: (dbg) Run:  kubectl --context addons-376551 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376551 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376551 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-376551 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-376551 delete pod task-pv-pod
addons_test.go:578: (dbg) Run:  kubectl --context addons-376551 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-376551 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-376551 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f4537f64-a262-4bb5-a57c-23c927e9febb] Pending
helpers_test.go:344: "task-pv-pod-restore" [f4537f64-a262-4bb5-a57c-23c927e9febb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f4537f64-a262-4bb5-a57c-23c927e9febb] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.015271645s
addons_test.go:604: (dbg) Run:  kubectl --context addons-376551 delete pod task-pv-pod-restore
addons_test.go:608: (dbg) Run:  kubectl --context addons-376551 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-376551 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-amd64 -p addons-376551 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.773668058s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.30s)
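The repeated helpers_test.go:394 lines above are the harness polling the PVC phase until it reports Bound. A minimal shell sketch of that wait loop (`poll_until` is a hypothetical helper, not minikube code; the `echo` stub stands in for the real kubectl call so the sketch runs standalone):

```shell
# Hypothetical poll_until helper mirroring the harness's PVC wait loop:
# re-run a command until its output equals the wanted value, or give up.
poll_until() {
  local want=$1 tries=$2
  shift 2
  local i out
  for ((i = 1; i <= tries; i++)); do
    out=$("$@") || true
    if [ "$out" = "$want" ]; then
      echo "ready after $i checks"
      return 0
    fi
    sleep 1  # the real harness also waits between polls
  done
  echo "timed out waiting for $want" >&2
  return 1
}

# Against a live cluster this would look like:
#   poll_until Bound 360 kubectl --context addons-376551 get pvc hpvc \
#     -o "jsonpath={.status.phase}" -n default
# Stubbed here for illustration:
poll_until Bound 3 echo Bound
```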

TestAddons/parallel/Headlamp (16.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:802: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-376551 --alsologtostderr -v=1
addons_test.go:802: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-376551 --alsologtostderr -v=1: (1.104307994s)
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58b88cff49-jvbzm" [2f9773cb-0257-4eb6-bc1f-7131d13eb953] Pending
helpers_test.go:344: "headlamp-58b88cff49-jvbzm" [2f9773cb-0257-4eb6-bc1f-7131d13eb953] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-jvbzm" [2f9773cb-0257-4eb6-bc1f-7131d13eb953] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58b88cff49-jvbzm" [2f9773cb-0257-4eb6-bc1f-7131d13eb953] Running
addons_test.go:807: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.036097361s
--- PASS: TestAddons/parallel/Headlamp (16.14s)

TestAddons/parallel/CloudSpanner (5.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-7d49f968d9-4x9wx" [c0f1f1e6-429d-48b6-9231-42e49b08491a] Running
addons_test.go:835: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.018341228s
addons_test.go:838: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-376551
--- PASS: TestAddons/parallel/CloudSpanner (5.61s)

TestAddons/parallel/LocalPath (57.55s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:851: (dbg) Run:  kubectl --context addons-376551 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:857: (dbg) Run:  kubectl --context addons-376551 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:861: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [70709197-2b78-4458-ba3a-0a5e2b26e530] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [70709197-2b78-4458-ba3a-0a5e2b26e530] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [70709197-2b78-4458-ba3a-0a5e2b26e530] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:864: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.016301701s
addons_test.go:869: (dbg) Run:  kubectl --context addons-376551 get pvc test-pvc -o=json
addons_test.go:878: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 ssh "cat /opt/local-path-provisioner/pvc-e294544d-3aa2-421c-adab-53b08ab3d50f_default_test-pvc/file1"
addons_test.go:890: (dbg) Run:  kubectl --context addons-376551 delete pod test-local-path
addons_test.go:894: (dbg) Run:  kubectl --context addons-376551 delete pvc test-pvc
addons_test.go:898: (dbg) Run:  out/minikube-linux-amd64 -p addons-376551 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:898: (dbg) Done: out/minikube-linux-amd64 -p addons-376551 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.415952628s)
--- PASS: TestAddons/parallel/LocalPath (57.55s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:628: (dbg) Run:  kubectl --context addons-376551 create ns new-namespace
addons_test.go:642: (dbg) Run:  kubectl --context addons-376551 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.36s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:150: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-376551
addons_test.go:150: (dbg) Done: out/minikube-linux-amd64 stop -p addons-376551: (13.097624362s)
addons_test.go:154: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-376551
addons_test.go:158: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-376551
addons_test.go:163: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-376551
--- PASS: TestAddons/StoppedEnableDisable (13.36s)

TestCertOptions (66.27s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-481641 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
E1002 19:04:41.534273  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:04:43.370832  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-481641 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m4.770928615s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-481641 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-481641 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-481641 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-481641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-481641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-481641: (1.054979682s)
--- PASS: TestCertOptions (66.27s)

TestCertExpiration (323.72s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-265161 --memory=2048 --cert-expiration=3m --driver=kvm2 
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-265161 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m46.018880838s)
E1002 19:00:37.630401  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-265161 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E1002 19:03:20.251073  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:20.891629  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-265161 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (36.694645705s)
helpers_test.go:175: Cleaning up "cert-expiration-265161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-265161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-265161: (1.009099107s)
--- PASS: TestCertExpiration (323.72s)

TestDockerFlags (108.43s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-473353 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-473353 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m46.747108374s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-473353 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-473353 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-473353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-473353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-473353: (1.216904949s)
--- PASS: TestDockerFlags (108.43s)

TestForceSystemdFlag (84.93s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-740341 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-740341 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m23.058114834s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-740341 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-740341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-740341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-740341: (1.630880497s)
--- PASS: TestForceSystemdFlag (84.93s)

TestForceSystemdEnv (83.84s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-039352 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-039352 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m22.496337351s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-039352 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-039352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-039352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-039352: (1.08644455s)
--- PASS: TestForceSystemdEnv (83.84s)

TestKVMDriverInstallOrUpdate (3.65s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.65s)

TestErrorSpam/setup (51.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-550960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-550960 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-550960 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-550960 --driver=kvm2 : (51.226613696s)
--- PASS: TestErrorSpam/setup (51.23s)

TestErrorSpam/start (0.35s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 start --dry-run
--- PASS: TestErrorSpam/start (0.35s)

TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.21s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 pause
--- PASS: TestErrorSpam/pause (1.21s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (12.55s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 stop: (12.418422148s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-550960 --log_dir /tmp/nospam-550960 stop
--- PASS: TestErrorSpam/stop (12.55s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17339-126802/.minikube/files/etc/test/nested/copy/134025/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (71.94s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-720299 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-720299 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m11.941950899s)
--- PASS: TestFunctional/serial/StartWithProxy (71.94s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.06s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-720299 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-720299 --alsologtostderr -v=8: (39.058362099s)
functional_test.go:659: soft start took 39.059032232s for "functional-720299" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.06s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-720299 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 cache add registry.k8s.io/pause:3.1: (1.24711008s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 cache add registry.k8s.io/pause:3.3: (1.161007391s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 cache add registry.k8s.io/pause:latest: (1.134116027s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.54s)

TestFunctional/serial/CacheCmd/cache/add_local (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-720299 /tmp/TestFunctionalserialCacheCmdcacheadd_local2324845372/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cache add minikube-local-cache-test:functional-720299
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 cache add minikube-local-cache-test:functional-720299: (1.381384267s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cache delete minikube-local-cache-test:functional-720299
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-720299
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.70s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (239.374327ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.57s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 kubectl -- --context functional-720299 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-720299 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (39.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-720299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1002 18:31:40.322497  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.328275  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.338538  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.358830  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.399181  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.479556  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.640005  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:40.960671  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:41.601648  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:42.882594  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:45.443290  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:31:50.563760  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:32:00.804736  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-720299 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.133302696s)
functional_test.go:757: restart took 39.133428492s for "functional-720299" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.13s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-720299 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 logs: (1.133708882s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

TestFunctional/serial/LogsFileCmd (1.1s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 logs --file /tmp/TestFunctionalserialLogsFileCmd3244322325/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 logs --file /tmp/TestFunctionalserialLogsFileCmd3244322325/001/logs.txt: (1.097175256s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.10s)

TestFunctional/serial/InvalidService (4.67s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-720299 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-720299
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-720299: exit status 115 (293.978475ms)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.72:32505 |
	|-----------|-------------|-------------|----------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-720299 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-720299 delete -f testdata/invalidsvc.yaml: (1.033599106s)
--- PASS: TestFunctional/serial/InvalidService (4.67s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 config get cpus: exit status 14 (62.048945ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 config get cpus: exit status 14 (44.527536ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)

TestFunctional/parallel/DashboardCmd (13.88s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-720299 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-720299 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 141064: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.88s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-720299 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-720299 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (138.855261ms)

-- stdout --
	* [functional-720299] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile

-- /stdout --
** stderr ** 
	I1002 18:32:53.829246  140807 out.go:296] Setting OutFile to fd 1 ...
	I1002 18:32:53.829358  140807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:32:53.829393  140807 out.go:309] Setting ErrFile to fd 2...
	I1002 18:32:53.829404  140807 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:32:53.829602  140807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	I1002 18:32:53.830163  140807 out.go:303] Setting JSON to false
	I1002 18:32:53.831073  140807 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4520,"bootTime":1696267054,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 18:32:53.831135  140807 start.go:138] virtualization: kvm guest
	I1002 18:32:53.833652  140807 out.go:177] * [functional-720299] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I1002 18:32:53.835695  140807 out.go:177]   - MINIKUBE_LOCATION=17339
	I1002 18:32:53.835719  140807 notify.go:220] Checking for updates...
	I1002 18:32:53.837763  140807 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 18:32:53.839622  140807 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 18:32:53.841230  140807 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 18:32:53.842722  140807 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 18:32:53.844672  140807 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 18:32:53.846584  140807 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 18:32:53.847055  140807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:32:53.847117  140807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:32:53.864023  140807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34915
	I1002 18:32:53.864623  140807 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:32:53.865323  140807 main.go:141] libmachine: Using API Version  1
	I1002 18:32:53.865364  140807 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:32:53.865751  140807 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:32:53.865995  140807 main.go:141] libmachine: (functional-720299) Calling .DriverName
	I1002 18:32:53.866261  140807 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 18:32:53.866687  140807 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:32:53.866739  140807 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:32:53.884345  140807 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46605
	I1002 18:32:53.884860  140807 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:32:53.885340  140807 main.go:141] libmachine: Using API Version  1
	I1002 18:32:53.885403  140807 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:32:53.885754  140807 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:32:53.885980  140807 main.go:141] libmachine: (functional-720299) Calling .DriverName
	I1002 18:32:53.921720  140807 out.go:177] * Using the kvm2 driver based on existing profile
	I1002 18:32:53.923785  140807 start.go:298] selected driver: kvm2
	I1002 18:32:53.923805  140807 start.go:902] validating driver "kvm2" against &{Name:functional-720299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.2 ClusterName:functional-720299 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.72 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 18:32:53.923927  140807 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 18:32:53.926330  140807 out.go:177] 
	W1002 18:32:53.928097  140807 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1002 18:32:53.929618  140807 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-720299 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-720299 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-720299 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (134.360122ms)

-- stdout --
	* [functional-720299] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1002 18:32:54.110852  140863 out.go:296] Setting OutFile to fd 1 ...
	I1002 18:32:54.110953  140863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:32:54.110963  140863 out.go:309] Setting ErrFile to fd 2...
	I1002 18:32:54.110967  140863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:32:54.111260  140863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	I1002 18:32:54.111785  140863 out.go:303] Setting JSON to false
	I1002 18:32:54.112665  140863 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4520,"bootTime":1696267054,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1002 18:32:54.112734  140863 start.go:138] virtualization: kvm guest
	I1002 18:32:54.115356  140863 out.go:177] * [functional-720299] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I1002 18:32:54.117462  140863 notify.go:220] Checking for updates...
	I1002 18:32:54.117477  140863 out.go:177]   - MINIKUBE_LOCATION=17339
	I1002 18:32:54.119095  140863 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1002 18:32:54.120533  140863 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	I1002 18:32:54.122083  140863 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	I1002 18:32:54.123632  140863 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1002 18:32:54.125485  140863 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1002 18:32:54.127605  140863 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 18:32:54.128280  140863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:32:54.128406  140863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:32:54.143568  140863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39155
	I1002 18:32:54.144092  140863 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:32:54.144673  140863 main.go:141] libmachine: Using API Version  1
	I1002 18:32:54.144696  140863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:32:54.145126  140863 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:32:54.145326  140863 main.go:141] libmachine: (functional-720299) Calling .DriverName
	I1002 18:32:54.145594  140863 driver.go:373] Setting default libvirt URI to qemu:///system
	I1002 18:32:54.145903  140863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:32:54.145953  140863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:32:54.160439  140863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I1002 18:32:54.160985  140863 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:32:54.161580  140863 main.go:141] libmachine: Using API Version  1
	I1002 18:32:54.161625  140863 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:32:54.161972  140863 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:32:54.162199  140863 main.go:141] libmachine: (functional-720299) Calling .DriverName
	I1002 18:32:54.196671  140863 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1002 18:32:54.198422  140863 start.go:298] selected driver: kvm2
	I1002 18:32:54.198445  140863 start.go:902] validating driver "kvm2" against &{Name:functional-720299 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17240/minikube-v1.31.0-1695060926-17240-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1696241247-17339@sha256:77c3e98870a99538e39ecb73a5e5230b746fa8c633c297c3d287ad4bba01a880 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.2 ClusterName:functional-720299 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.72 Port:8441 KubernetesVersion:v1.28.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 Ce
rtExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I1002 18:32:54.198586  140863 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1002 18:32:54.200905  140863 out.go:177] 
	W1002 18:32:54.203185  140863 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1002 18:32:54.204677  140863 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (13.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-720299 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-720299 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-xthtj" [9fa804de-784a-45e3-bdae-d648b7390a33] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-xthtj" [9fa804de-784a-45e3-bdae-d648b7390a33] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.029732532s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.72:31477
functional_test.go:1674: http://192.168.39.72:31477: success! body:

Hostname: hello-node-connect-55497b8b78-xthtj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.72:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.72:31477
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.60s)

TestFunctional/parallel/AddonsCmd (0.12s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

TestFunctional/parallel/PersistentVolumeClaim (60.21s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [608d37f1-ed49-432c-aca8-fb79ab5663e9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.015148194s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-720299 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-720299 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-720299 get pvc myclaim -o=json
E1002 18:32:21.285700  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-720299 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-720299 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [797b1961-d2b5-4c0f-ac6d-ffe1edbf75f6] Pending
helpers_test.go:344: "sp-pod" [797b1961-d2b5-4c0f-ac6d-ffe1edbf75f6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [797b1961-d2b5-4c0f-ac6d-ffe1edbf75f6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 37.022310403s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-720299 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-720299 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-720299 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ca119c07-4dc0-4aea-a8cc-7d45a9e131ec] Pending
helpers_test.go:344: "sp-pod" [ca119c07-4dc0-4aea-a8cc-7d45a9e131ec] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ca119c07-4dc0-4aea-a8cc-7d45a9e131ec] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.031813229s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-720299 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (60.21s)

TestFunctional/parallel/SSHCmd (0.41s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.41s)

TestFunctional/parallel/CpCmd (0.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh -n functional-720299 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 cp functional-720299:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3483441267/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh -n functional-720299 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.89s)

TestFunctional/parallel/MySQL (37.32s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-720299 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-pttqb" [37b91bcb-90a0-460b-9468-1c475312a385] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-pttqb" [37b91bcb-90a0-460b-9468-1c475312a385] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 31.019209699s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;": exit status 1 (305.941372ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;": exit status 1 (487.861689ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;": exit status 1 (585.389757ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-720299 exec mysql-859648c796-pttqb -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (37.32s)

TestFunctional/parallel/FileSync (0.2s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/134025/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /etc/test/nested/copy/134025/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.20s)

TestFunctional/parallel/CertSync (1.35s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/134025.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /etc/ssl/certs/134025.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/134025.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /usr/share/ca-certificates/134025.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/1340252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /etc/ssl/certs/1340252.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/1340252.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /usr/share/ca-certificates/1340252.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.35s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-720299 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh "sudo systemctl is-active crio": exit status 1 (207.498838ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/License (0.8s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.80s)

TestFunctional/parallel/ServiceCmd/DeployApp (13.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-720299 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-720299 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-r8mf8" [97070337-9643-4c2a-a230-d18c4ba0221a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-r8mf8" [97070337-9643-4c2a-a230-d18c4ba0221a] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.026549383s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.27s)

TestFunctional/parallel/DockerEnv/bash (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-720299 docker-env) && out/minikube-linux-amd64 status -p functional-720299"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-720299 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.82s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (1.1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 version -o=json --components: (1.104033466s)
--- PASS: TestFunctional/parallel/Version/components (1.10s)

TestFunctional/parallel/ServiceCmd/List (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.43s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 service list -o json
functional_test.go:1493: Took "446.863948ms" to run "out/minikube-linux-amd64 -p functional-720299 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.72:32197
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.72:32197
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctional/parallel/MountCmd/any-port (28.72s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdany-port187355278/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1696271550504569579" to /tmp/TestFunctionalparallelMountCmdany-port187355278/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1696271550504569579" to /tmp/TestFunctionalparallelMountCmdany-port187355278/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1696271550504569579" to /tmp/TestFunctionalparallelMountCmdany-port187355278/001/test-1696271550504569579
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (208.664942ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  2 18:32 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  2 18:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  2 18:32 test-1696271550504569579
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh cat /mount-9p/test-1696271550504569579
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-720299 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [51a1e422-99e7-48f6-94bc-17fc0d03bb04] Pending
helpers_test.go:344: "busybox-mount" [51a1e422-99e7-48f6-94bc-17fc0d03bb04] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [51a1e422-99e7-48f6-94bc-17fc0d03bb04] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [51a1e422-99e7-48f6-94bc-17fc0d03bb04] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 26.013623493s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-720299 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdany-port187355278/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (28.72s)
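The first `findmnt` probe above exits non-zero while the 9p mount is still coming up, and the harness simply re-runs it until it succeeds. That poll-until-ready pattern can be sketched as follows; the stand-in commands here replace the real `minikube ssh` invocation, which needs a running cluster:

```python
import subprocess
import time

def wait_for(cmd, attempts=5, delay=0.1):
    """Re-run cmd until it exits 0, mirroring the retry around
    ssh "findmnt -T /mount-9p | grep 9p" in the log above."""
    for _ in range(attempts):
        if subprocess.run(cmd, shell=True).returncode == 0:
            return True
        time.sleep(delay)
    return False

# Stand-in command that succeeds immediately; a real caller would pass
# something like:
#   out/minikube-linux-amd64 -p <profile> ssh "findmnt -T /mount-9p | grep 9p"
ok = wait_for("true")
```

In the log the second probe succeeds about 200ms after the first failure, so a short retry loop like this is enough.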

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "272.181114ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "44.624894ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "266.067522ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "38.755412ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-720299 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-720299
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-720299
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-720299 image ls --format short --alsologtostderr:
I1002 18:32:57.212448  141073 out.go:296] Setting OutFile to fd 1 ...
I1002 18:32:57.212566  141073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:32:57.212579  141073 out.go:309] Setting ErrFile to fd 2...
I1002 18:32:57.212586  141073 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:32:57.212784  141073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
I1002 18:32:57.213326  141073 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:32:57.213435  141073 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:32:57.213826  141073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:32:57.213869  141073 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:32:57.228918  141073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44555
I1002 18:32:57.229384  141073 main.go:141] libmachine: () Calling .GetVersion
I1002 18:32:57.229935  141073 main.go:141] libmachine: Using API Version  1
I1002 18:32:57.229961  141073 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:32:57.230333  141073 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:32:57.230524  141073 main.go:141] libmachine: (functional-720299) Calling .GetState
I1002 18:32:57.232471  141073 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:32:57.232528  141073 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:32:57.247523  141073 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37591
I1002 18:32:57.248035  141073 main.go:141] libmachine: () Calling .GetVersion
I1002 18:32:57.248600  141073 main.go:141] libmachine: Using API Version  1
I1002 18:32:57.248628  141073 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:32:57.248981  141073 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:32:57.249196  141073 main.go:141] libmachine: (functional-720299) Calling .DriverName
I1002 18:32:57.249430  141073 ssh_runner.go:195] Run: systemctl --version
I1002 18:32:57.249460  141073 main.go:141] libmachine: (functional-720299) Calling .GetSSHHostname
I1002 18:32:57.252370  141073 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:32:57.252775  141073 main.go:141] libmachine: (functional-720299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:84:47", ip: ""} in network mk-functional-720299: {Iface:virbr1 ExpiryTime:2023-10-02 19:29:45 +0000 UTC Type:0 Mac:52:54:00:66:84:47 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-720299 Clientid:01:52:54:00:66:84:47}
I1002 18:32:57.252812  141073 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined IP address 192.168.39.72 and MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:32:57.252974  141073 main.go:141] libmachine: (functional-720299) Calling .GetSSHPort
I1002 18:32:57.253176  141073 main.go:141] libmachine: (functional-720299) Calling .GetSSHKeyPath
I1002 18:32:57.253335  141073 main.go:141] libmachine: (functional-720299) Calling .GetSSHUsername
I1002 18:32:57.253533  141073 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/functional-720299/id_rsa Username:docker}
I1002 18:32:57.375391  141073 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 18:32:57.424499  141073 main.go:141] libmachine: Making call to close driver server
I1002 18:32:57.424515  141073 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:32:57.424816  141073 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:32:57.424842  141073 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 18:32:57.424853  141073 main.go:141] libmachine: Making call to close driver server
I1002 18:32:57.424864  141073 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:32:57.424825  141073 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:32:57.425097  141073 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:32:57.425116  141073 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:32:57.425124  141073 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-720299 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler              | v1.28.2           | 7a5d9d67a13f6 | 60.1MB |
| docker.io/library/mysql                     | 5.7               | 92034fe9a41f4 | 581MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 61395b4c586da | 187MB  |
| registry.k8s.io/kube-controller-manager     | v1.28.2           | 55f13c92defb1 | 122MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| registry.k8s.io/kube-proxy                  | v1.28.2           | c120fed2beb84 | 73.1MB |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/google-containers/addon-resizer      | functional-720299 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/localhost/my-image                | functional-720299 | af5d51a5327d0 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-720299 | b6fa2e4ccf7f5 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.28.2           | cdcab12b2dd16 | 126MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-720299 image ls --format table --alsologtostderr:
I1002 18:33:02.349981  141699 out.go:296] Setting OutFile to fd 1 ...
I1002 18:33:02.350093  141699 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:33:02.350108  141699 out.go:309] Setting ErrFile to fd 2...
I1002 18:33:02.350115  141699 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:33:02.350320  141699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
I1002 18:33:02.350898  141699 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:33:02.351022  141699 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:33:02.351442  141699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:33:02.351500  141699 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:33:02.367872  141699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37515
I1002 18:33:02.368453  141699 main.go:141] libmachine: () Calling .GetVersion
I1002 18:33:02.369118  141699 main.go:141] libmachine: Using API Version  1
I1002 18:33:02.369150  141699 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:33:02.369590  141699 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:33:02.369817  141699 main.go:141] libmachine: (functional-720299) Calling .GetState
I1002 18:33:02.372178  141699 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:33:02.372224  141699 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:33:02.388398  141699 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39067
I1002 18:33:02.388978  141699 main.go:141] libmachine: () Calling .GetVersion
I1002 18:33:02.389591  141699 main.go:141] libmachine: Using API Version  1
I1002 18:33:02.389620  141699 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:33:02.390163  141699 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:33:02.390377  141699 main.go:141] libmachine: (functional-720299) Calling .DriverName
I1002 18:33:02.390650  141699 ssh_runner.go:195] Run: systemctl --version
I1002 18:33:02.390682  141699 main.go:141] libmachine: (functional-720299) Calling .GetSSHHostname
I1002 18:33:02.394030  141699 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:33:02.394540  141699 main.go:141] libmachine: (functional-720299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:84:47", ip: ""} in network mk-functional-720299: {Iface:virbr1 ExpiryTime:2023-10-02 19:29:45 +0000 UTC Type:0 Mac:52:54:00:66:84:47 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-720299 Clientid:01:52:54:00:66:84:47}
I1002 18:33:02.394588  141699 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined IP address 192.168.39.72 and MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:33:02.394768  141699 main.go:141] libmachine: (functional-720299) Calling .GetSSHPort
I1002 18:33:02.395001  141699 main.go:141] libmachine: (functional-720299) Calling .GetSSHKeyPath
I1002 18:33:02.395168  141699 main.go:141] libmachine: (functional-720299) Calling .GetSSHUsername
I1002 18:33:02.395326  141699 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/functional-720299/id_rsa Username:docker}
I1002 18:33:02.479559  141699 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 18:33:02.521023  141699 main.go:141] libmachine: Making call to close driver server
I1002 18:33:02.521045  141699 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:33:02.521343  141699 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:33:02.521366  141699 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 18:33:02.521390  141699 main.go:141] libmachine: Making call to close driver server
I1002 18:33:02.521400  141699 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:33:02.521693  141699 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:33:02.521693  141699 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:33:02.521732  141699 main.go:141] libmachine: Making call to close connection to plugin binary
2023/10/02 18:33:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
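Per the stderr trace, the table is built from `docker images --no-trunc --format "{{json .}}"` run inside the guest, which emits one JSON object per line. A minimal sketch of turning that JSON-lines output into table rows (the sample lines and field subset are illustrative, with values taken from the table above):

```python
import json

# Two sample lines in the shape emitted by `docker images --format "{{json .}}"`
# (fields abridged; IDs shortened for illustration).
raw = "\n".join([
    '{"Repository":"registry.k8s.io/pause","Tag":"3.9","ID":"e6f1816883972","Size":"744kB"}',
    '{"Repository":"registry.k8s.io/etcd","Tag":"3.5.9-0","ID":"73deb9a3f7025","Size":"294MB"}',
])

rows = []
for line in raw.splitlines():
    img = json.loads(line)  # one JSON object per line of output
    rows.append((img["Repository"], img["Tag"], img["ID"], img["Size"]))

for repo, tag, image_id, size in rows:
    print(f"| {repo:<40} | {tag:<10} | {image_id:<15} | {size:<6} |")
```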

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-720299 image ls --format json --alsologtostderr:
[{"id":"af5d51a5327d078b1362bc621dfd8ddcac76d1ee9695571f6e8d857bf2265d87","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-720299"],"size":"1240000"},{"id":"cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.2"],"size":"126000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-720299"],"size":"32900000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"b6fa2e4ccf7f5640da035b2e7244bb35f9aabc6597882b21e0e5583bd3cb5005","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-720299"],"size":"30"},{"id"
:"c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.2"],"size":"73100000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57","repoDig
ests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.2"],"size":"122000000"},{"id":"7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.2"],"size":"60100000"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"581000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-720299 image ls --format json --alsologtostderr:
I1002 18:33:02.104163  141647 out.go:296] Setting OutFile to fd 1 ...
I1002 18:33:02.104464  141647 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:33:02.104479  141647 out.go:309] Setting ErrFile to fd 2...
I1002 18:33:02.104485  141647 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:33:02.104750  141647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
I1002 18:33:02.105450  141647 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:33:02.105596  141647 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:33:02.106126  141647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:33:02.106175  141647 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:33:02.126440  141647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40553
I1002 18:33:02.127080  141647 main.go:141] libmachine: () Calling .GetVersion
I1002 18:33:02.127853  141647 main.go:141] libmachine: Using API Version  1
I1002 18:33:02.127899  141647 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:33:02.128366  141647 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:33:02.128606  141647 main.go:141] libmachine: (functional-720299) Calling .GetState
I1002 18:33:02.131317  141647 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:33:02.131373  141647 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:33:02.152356  141647 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41839
I1002 18:33:02.152932  141647 main.go:141] libmachine: () Calling .GetVersion
I1002 18:33:02.153450  141647 main.go:141] libmachine: Using API Version  1
I1002 18:33:02.153471  141647 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:33:02.153892  141647 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:33:02.154146  141647 main.go:141] libmachine: (functional-720299) Calling .DriverName
I1002 18:33:02.154444  141647 ssh_runner.go:195] Run: systemctl --version
I1002 18:33:02.154475  141647 main.go:141] libmachine: (functional-720299) Calling .GetSSHHostname
I1002 18:33:02.157840  141647 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:33:02.158283  141647 main.go:141] libmachine: (functional-720299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:84:47", ip: ""} in network mk-functional-720299: {Iface:virbr1 ExpiryTime:2023-10-02 19:29:45 +0000 UTC Type:0 Mac:52:54:00:66:84:47 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-720299 Clientid:01:52:54:00:66:84:47}
I1002 18:33:02.158316  141647 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined IP address 192.168.39.72 and MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:33:02.158454  141647 main.go:141] libmachine: (functional-720299) Calling .GetSSHPort
I1002 18:33:02.158660  141647 main.go:141] libmachine: (functional-720299) Calling .GetSSHKeyPath
I1002 18:33:02.158811  141647 main.go:141] libmachine: (functional-720299) Calling .GetSSHUsername
I1002 18:33:02.158966  141647 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/functional-720299/id_rsa Username:docker}
I1002 18:33:02.267532  141647 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 18:33:02.303060  141647 main.go:141] libmachine: Making call to close driver server
I1002 18:33:02.303085  141647 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:33:02.303406  141647 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:33:02.303461  141647 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:33:02.303475  141647 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 18:33:02.303533  141647 main.go:141] libmachine: Making call to close driver server
I1002 18:33:02.303550  141647 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:33:02.303789  141647 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:33:02.303811  141647 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)
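The `image ls --format json` output shown above is a flat JSON array of image records (`id`, `repoDigests`, `repoTags`, `size`, with sizes as decimal strings). A minimal Go sketch of decoding that shape, using a sample element copied from the output above (the `imageRecord` struct and `parseImages` helper are illustrative, not minikube's own code):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// imageRecord mirrors the fields seen in the `image ls --format json`
// output above (field names taken from the report output).
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

// parseImages decodes a JSON array of image records.
func parseImages(data []byte) ([]imageRecord, error) {
	var images []imageRecord
	err := json.Unmarshal(data, &images)
	return images, err
}

func main() {
	// Sample element copied from the report output above.
	sample := []byte(`[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"}]`)
	images, err := parseImages(sample)
	if err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%s (%s bytes)\n", img.RepoTags[0], img.Size)
	}
}
```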

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-720299 image ls --format yaml --alsologtostderr:
- id: 7a5d9d67a13f6ae031989bc2969ec55b06437725f397e6eb75b1dccac465a7b8
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.2
size: "60100000"
- id: 55f13c92defb1eb854040a76e366da866bdcb1cc31fd97b2cde94433c8bf3f57
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.2
size: "122000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 61395b4c586da2b9b3b7ca903ea6a448e6783dfdd7f768ff2c1a0f3360aaba99
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "581000000"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: cdcab12b2dd16cce4efc5dd43c082469364f19ad978e922d110b74a42eff7cce
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.2
size: "126000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: b6fa2e4ccf7f5640da035b2e7244bb35f9aabc6597882b21e0e5583bd3cb5005
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-720299
size: "30"
- id: c120fed2beb84b861c2382ce81ab046c0ae612e91264ef7c9e61df5900fa0bb0
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.2
size: "73100000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-720299
size: "32900000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-720299 image ls --format yaml --alsologtostderr:
I1002 18:32:57.476154  141097 out.go:296] Setting OutFile to fd 1 ...
I1002 18:32:57.476354  141097 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:32:57.476375  141097 out.go:309] Setting ErrFile to fd 2...
I1002 18:32:57.476391  141097 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:32:57.476764  141097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
I1002 18:32:57.477652  141097 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:32:57.477815  141097 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:32:57.478349  141097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:32:57.478419  141097 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:32:57.493063  141097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35305
I1002 18:32:57.493553  141097 main.go:141] libmachine: () Calling .GetVersion
I1002 18:32:57.494144  141097 main.go:141] libmachine: Using API Version  1
I1002 18:32:57.494171  141097 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:32:57.494498  141097 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:32:57.494734  141097 main.go:141] libmachine: (functional-720299) Calling .GetState
I1002 18:32:57.496735  141097 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:32:57.496796  141097 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:32:57.511318  141097 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36441
I1002 18:32:57.511926  141097 main.go:141] libmachine: () Calling .GetVersion
I1002 18:32:57.512496  141097 main.go:141] libmachine: Using API Version  1
I1002 18:32:57.512521  141097 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:32:57.512908  141097 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:32:57.513164  141097 main.go:141] libmachine: (functional-720299) Calling .DriverName
I1002 18:32:57.513438  141097 ssh_runner.go:195] Run: systemctl --version
I1002 18:32:57.513471  141097 main.go:141] libmachine: (functional-720299) Calling .GetSSHHostname
I1002 18:32:57.516255  141097 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:32:57.516735  141097 main.go:141] libmachine: (functional-720299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:84:47", ip: ""} in network mk-functional-720299: {Iface:virbr1 ExpiryTime:2023-10-02 19:29:45 +0000 UTC Type:0 Mac:52:54:00:66:84:47 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-720299 Clientid:01:52:54:00:66:84:47}
I1002 18:32:57.516782  141097 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined IP address 192.168.39.72 and MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:32:57.516862  141097 main.go:141] libmachine: (functional-720299) Calling .GetSSHPort
I1002 18:32:57.517075  141097 main.go:141] libmachine: (functional-720299) Calling .GetSSHKeyPath
I1002 18:32:57.517233  141097 main.go:141] libmachine: (functional-720299) Calling .GetSSHUsername
I1002 18:32:57.517427  141097 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/functional-720299/id_rsa Username:docker}
I1002 18:32:57.617996  141097 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1002 18:32:57.672308  141097 main.go:141] libmachine: Making call to close driver server
I1002 18:32:57.672321  141097 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:32:57.672608  141097 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:32:57.672626  141097 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:32:57.672638  141097 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 18:32:57.672654  141097 main.go:141] libmachine: Making call to close driver server
I1002 18:32:57.672663  141097 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:32:57.672963  141097 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:32:57.672979  141097 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:32:57.672987  141097 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh pgrep buildkitd: exit status 1 (178.748825ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image build -t localhost/my-image:functional-720299 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image build -t localhost/my-image:functional-720299 testdata/build --alsologtostderr: (3.889380134s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-720299 image build -t localhost/my-image:functional-720299 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 4c67e316f6f9
Removing intermediate container 4c67e316f6f9
---> c1f638dc8a65
Step 3/3 : ADD content.txt /
---> af5d51a5327d
Successfully built af5d51a5327d
Successfully tagged localhost/my-image:functional-720299
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-720299 image build -t localhost/my-image:functional-720299 testdata/build --alsologtostderr:
I1002 18:32:57.893762  141152 out.go:296] Setting OutFile to fd 1 ...
I1002 18:32:57.894037  141152 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:32:57.894052  141152 out.go:309] Setting ErrFile to fd 2...
I1002 18:32:57.894056  141152 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1002 18:32:57.894251  141152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
I1002 18:32:57.894807  141152 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:32:57.895336  141152 config.go:182] Loaded profile config "functional-720299": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
I1002 18:32:57.895722  141152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:32:57.895773  141152 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:32:57.912052  141152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35893
I1002 18:32:57.912484  141152 main.go:141] libmachine: () Calling .GetVersion
I1002 18:32:57.912989  141152 main.go:141] libmachine: Using API Version  1
I1002 18:32:57.913014  141152 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:32:57.913342  141152 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:32:57.913597  141152 main.go:141] libmachine: (functional-720299) Calling .GetState
I1002 18:32:57.915629  141152 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1002 18:32:57.915675  141152 main.go:141] libmachine: Launching plugin server for driver kvm2
I1002 18:32:57.930485  141152 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40989
I1002 18:32:57.931031  141152 main.go:141] libmachine: () Calling .GetVersion
I1002 18:32:57.931526  141152 main.go:141] libmachine: Using API Version  1
I1002 18:32:57.931549  141152 main.go:141] libmachine: () Calling .SetConfigRaw
I1002 18:32:57.931911  141152 main.go:141] libmachine: () Calling .GetMachineName
I1002 18:32:57.932120  141152 main.go:141] libmachine: (functional-720299) Calling .DriverName
I1002 18:32:57.932379  141152 ssh_runner.go:195] Run: systemctl --version
I1002 18:32:57.932406  141152 main.go:141] libmachine: (functional-720299) Calling .GetSSHHostname
I1002 18:32:57.935410  141152 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:32:57.935745  141152 main.go:141] libmachine: (functional-720299) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:66:84:47", ip: ""} in network mk-functional-720299: {Iface:virbr1 ExpiryTime:2023-10-02 19:29:45 +0000 UTC Type:0 Mac:52:54:00:66:84:47 Iaid: IPaddr:192.168.39.72 Prefix:24 Hostname:functional-720299 Clientid:01:52:54:00:66:84:47}
I1002 18:32:57.935785  141152 main.go:141] libmachine: (functional-720299) DBG | domain functional-720299 has defined IP address 192.168.39.72 and MAC address 52:54:00:66:84:47 in network mk-functional-720299
I1002 18:32:57.935950  141152 main.go:141] libmachine: (functional-720299) Calling .GetSSHPort
I1002 18:32:57.936146  141152 main.go:141] libmachine: (functional-720299) Calling .GetSSHKeyPath
I1002 18:32:57.936298  141152 main.go:141] libmachine: (functional-720299) Calling .GetSSHUsername
I1002 18:32:57.936428  141152 sshutil.go:53] new ssh client: &{IP:192.168.39.72 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/functional-720299/id_rsa Username:docker}
I1002 18:32:58.023329  141152 build_images.go:151] Building image from path: /tmp/build.2169496757.tar
I1002 18:32:58.023400  141152 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1002 18:32:58.032896  141152 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2169496757.tar
I1002 18:32:58.037228  141152 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2169496757.tar: stat -c "%s %y" /var/lib/minikube/build/build.2169496757.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2169496757.tar': No such file or directory
I1002 18:32:58.037263  141152 ssh_runner.go:362] scp /tmp/build.2169496757.tar --> /var/lib/minikube/build/build.2169496757.tar (3072 bytes)
I1002 18:32:58.060633  141152 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2169496757
I1002 18:32:58.069196  141152 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2169496757 -xf /var/lib/minikube/build/build.2169496757.tar
I1002 18:32:58.077525  141152 docker.go:340] Building image: /var/lib/minikube/build/build.2169496757
I1002 18:32:58.077607  141152 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-720299 /var/lib/minikube/build/build.2169496757
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I1002 18:33:01.695027  141152 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-720299 /var/lib/minikube/build/build.2169496757: (3.617389078s)
I1002 18:33:01.695109  141152 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2169496757
I1002 18:33:01.713495  141152 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2169496757.tar
I1002 18:33:01.735803  141152 build_images.go:207] Built localhost/my-image:functional-720299 from /tmp/build.2169496757.tar
I1002 18:33:01.735833  141152 build_images.go:123] succeeded building to: functional-720299
I1002 18:33:01.735837  141152 build_images.go:124] failed building to: 
I1002 18:33:01.735899  141152 main.go:141] libmachine: Making call to close driver server
I1002 18:33:01.735926  141152 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:33:01.736231  141152 main.go:141] libmachine: (functional-720299) DBG | Closing plugin on server side
I1002 18:33:01.736266  141152 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:33:01.736277  141152 main.go:141] libmachine: Making call to close connection to plugin binary
I1002 18:33:01.736295  141152 main.go:141] libmachine: Making call to close driver server
I1002 18:33:01.736308  141152 main.go:141] libmachine: (functional-720299) Calling .Close
I1002 18:33:01.736547  141152 main.go:141] libmachine: Successfully made call to close driver server
I1002 18:33:01.736572  141152 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.38s)
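The three build steps logged above (`Step 1/3` through `Step 3/3`) correspond to a Dockerfile of roughly this shape, reconstructed from the log; the actual contents of `testdata/build` may differ:

```dockerfile
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```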

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.354492686s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-720299
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image load --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image load --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr: (3.936887042s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image load --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image load --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr: (2.260505512s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.47s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.8565918s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-720299
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image load --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image load --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr: (4.814699158s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.92s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image save gcr.io/google-containers/addon-resizer:functional-720299 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image save gcr.io/google-containers/addon-resizer:functional-720299 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.750896877s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image rm gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.927071492s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-720299
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 image save --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-720299 image save --daemon gcr.io/google-containers/addon-resizer:functional-720299 --alsologtostderr: (2.190249539s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-720299
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.23s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdspecific-port1076490569/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (235.98918ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdspecific-port1076490569/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh "sudo umount -f /mount-9p": exit status 1 (231.407923ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-720299 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdspecific-port1076490569/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.84s)
TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1181861806/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1181861806/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1181861806/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T" /mount1: exit status 1 (271.12181ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-720299 ssh "findmnt -T" /mount3
E1002 18:33:02.246657  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-720299 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1181861806/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1181861806/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-720299 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1181861806/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.33s)
TestFunctional/delete_addon-resizer_images (0.07s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-720299
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)
TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-720299
--- PASS: TestFunctional/delete_my-image_image (0.01s)
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-720299
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)
TestGvisorAddon (354.11s)
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-566865 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-566865 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m31.907724883s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-566865 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-566865 cache add gcr.io/k8s-minikube/gvisor-addon:2: (23.136655826s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-566865 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-566865 addons enable gvisor: (5.344730158s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [408e6857-aa01-464a-9e45-9a20140b2423] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.037213603s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-566865 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [8e422366-25fc-4b64-86de-f48291fee10a] Pending
helpers_test.go:344: "nginx-gvisor" [8e422366-25fc-4b64-86de-f48291fee10a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1002 19:01:40.322350  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
helpers_test.go:344: "nginx-gvisor" [8e422366-25fc-4b64-86de-f48291fee10a] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 15.019946049s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-566865
E1002 19:02:15.201818  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-566865: (1m32.496305803s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-566865 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1002 19:03:24.732989  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-566865 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (49.227119424s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [408e6857-aa01-464a-9e45-9a20140b2423] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [408e6857-aa01-464a-9e45-9a20140b2423] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.071593897s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [8e422366-25fc-4b64-86de-f48291fee10a] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.130822924s
helpers_test.go:175: Cleaning up "gvisor-566865" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-566865
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-566865: (1.209950255s)
--- PASS: TestGvisorAddon (354.11s)
TestIngressAddonLegacy/StartLegacyK8sCluster (95.18s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-056933 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1002 18:34:24.167457  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-056933 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m35.17721006s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (95.18s)
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.96s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons enable ingress --alsologtostderr -v=5: (17.95803371s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.96s)
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.57s)
TestIngressAddonLegacy/serial/ValidateIngressAddons (39.97s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:185: (dbg) Run:  kubectl --context ingress-addon-legacy-056933 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Done: kubectl --context ingress-addon-legacy-056933 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (8.067003109s)
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-056933 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:223: (dbg) Run:  kubectl --context ingress-addon-legacy-056933 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c16ff024-ec57-468d-8172-0807940485d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c16ff024-ec57-468d-8172-0807940485d4] Running
addons_test.go:228: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.037734287s
addons_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056933 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Run:  kubectl --context ingress-addon-legacy-056933 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:269: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056933 ip
addons_test.go:275: (dbg) Run:  nslookup hello-john.test 192.168.39.214
addons_test.go:284: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:284: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons disable ingress-dns --alsologtostderr -v=1: (12.014649018s)
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons disable ingress --alsologtostderr -v=1
addons_test.go:289: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-056933 addons disable ingress --alsologtostderr -v=1: (7.494744169s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (39.97s)
TestJSONOutput/start/Command (70.19s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-558738 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1002 18:36:40.322659  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:37:08.008588  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:37:15.202172  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.207582  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.217922  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.238281  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.278654  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.359060  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.519571  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:15.840259  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:16.480954  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:17.761613  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:20.321831  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:37:25.442233  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-558738 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m10.193760382s)
--- PASS: TestJSONOutput/start/Command (70.19s)
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.57s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-558738 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.54s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-558738 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (7.4s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-558738 --output=json --user=testUser
E1002 18:37:35.682689  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-558738 --output=json --user=testUser: (7.395351571s)
--- PASS: TestJSONOutput/stop/Command (7.40s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.19s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-912791 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-912791 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (61.633703ms)
-- stdout --
	{"specversion":"1.0","id":"448a6610-220b-4e75-92b5-42ba9117ae30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-912791] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c695a05a-6d74-41d7-97e7-1e09398217d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17339"}}
	{"specversion":"1.0","id":"5b2066a3-05d1-435d-ba7e-36467418f655","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"bae058c4-a866-4163-ad94-a0b7822223eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig"}}
	{"specversion":"1.0","id":"722ffa59-18c6-4f8d-b96a-b1b3eea578ca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube"}}
	{"specversion":"1.0","id":"9296d951-fb78-4fe9-bf0d-cd93c39280f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c568272f-0b17-4086-bddd-5adcf7035777","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"42f674aa-e26a-4b79-affc-20aa53dfa5ae","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-912791" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-912791
--- PASS: TestErrorJSONOutput (0.19s)
TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)
TestMinikubeProfile (109.29s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-850455 --driver=kvm2 
E1002 18:37:56.163674  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-850455 --driver=kvm2 : (52.488850682s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-853987 --driver=kvm2 
E1002 18:38:37.125504  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-853987 --driver=kvm2 : (53.896002945s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-850455
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-853987
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-853987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-853987
helpers_test.go:175: Cleaning up "first-850455" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-850455
--- PASS: TestMinikubeProfile (109.29s)

TestMountStart/serial/StartWithMountFirst (31.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-534857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-534857 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.884944868s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.89s)

TestMountStart/serial/VerifyMountFirst (0.56s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-534857 ssh -- ls /minikube-host
E1002 18:39:59.046588  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-534857 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.56s)
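The mount verification above amounts to grepping the guest's mount table for a 9p filesystem entry. A minimal local sketch of that check, using a synthetic mount line (the device, mount point, and options shown here are illustrative; the real line comes from the VM):

```shell
# Synthetic stand-in for one line of `mount` output inside the guest;
# the device, mount point, and options are assumptions, not captured output.
sample='192.168.39.1 on /minikube-host type 9p (rw,relatime,sync,dirsync)'
# The test step passes when grep finds a 9p entry (exit status 0).
printf '%s\n' "$sample" | grep 9p
```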

TestMountStart/serial/StartWithMountSecond (30.72s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-553491 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-553491 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.721915789s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.72s)

TestMountStart/serial/VerifyMountSecond (0.37s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-553491 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-553491 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.87s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-534857 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.87s)

TestMountStart/serial/VerifyMountPostDelete (0.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-553491 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-553491 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.38s)

TestMountStart/serial/Stop (2.08s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-553491
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-553491: (2.078350633s)
--- PASS: TestMountStart/serial/Stop (2.08s)

TestMountStart/serial/RestartStopped (24.71s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-553491
E1002 18:40:37.629610  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:37.634892  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:37.645152  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:37.665474  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:37.705782  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:37.786139  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:37.946589  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:38.267246  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:38.908311  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:40.188965  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:42.750833  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:40:47.871263  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-553491: (23.708196626s)
E1002 18:40:58.111605  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
--- PASS: TestMountStart/serial/RestartStopped (24.71s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-553491 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-553491 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (129.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603165 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1002 18:41:18.592140  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:41:40.321941  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:41:59.552471  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:42:15.201454  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 18:42:42.887378  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603165 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m9.198786613s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.63s)

TestMultiNode/serial/DeployApp2Nodes (6.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-603165 -- rollout status deployment/busybox: (4.254020562s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-4kbhz -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-gmwfl -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-4kbhz -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-gmwfl -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-4kbhz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-gmwfl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.03s)

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-4kbhz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-4kbhz -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-gmwfl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603165 -- exec busybox-5bc68d56bd-gmwfl -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
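The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above pulls the host IP out of busybox-style nslookup output, where the answer sits on line 5 as the third space-separated field. A sketch against a synthetic sample (the exact layout varies by busybox build, which is precisely the format assumption the hard-coded line and field numbers bake in):

```shell
# Synthetic busybox-style nslookup output; the addresses and layout here are
# illustrative, so NR==5 / field 3 only hold for this assumed format.
printf '%s\n' \
  'Server:    10.96.0.10' \
  'Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local' \
  '' \
  'Name:      host.minikube.internal' \
  'Address 1: 192.168.39.1' \
  | awk 'NR==5' | cut -d' ' -f3
```

With this sample, the pipeline prints `192.168.39.1`, which the test then feeds to `ping -c 1`.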

TestMultiNode/serial/AddNode (47.89s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-603165 -v 3 --alsologtostderr
E1002 18:43:21.473397  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-603165 -v 3 --alsologtostderr: (47.290463165s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.89s)

TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

TestMultiNode/serial/CopyFile (7.34s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp testdata/cp-test.txt multinode-603165:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1928602203/001/cp-test_multinode-603165.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165:/home/docker/cp-test.txt multinode-603165-m02:/home/docker/cp-test_multinode-603165_multinode-603165-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m02 "sudo cat /home/docker/cp-test_multinode-603165_multinode-603165-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165:/home/docker/cp-test.txt multinode-603165-m03:/home/docker/cp-test_multinode-603165_multinode-603165-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m03 "sudo cat /home/docker/cp-test_multinode-603165_multinode-603165-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp testdata/cp-test.txt multinode-603165-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1928602203/001/cp-test_multinode-603165-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165-m02:/home/docker/cp-test.txt multinode-603165:/home/docker/cp-test_multinode-603165-m02_multinode-603165.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165 "sudo cat /home/docker/cp-test_multinode-603165-m02_multinode-603165.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165-m02:/home/docker/cp-test.txt multinode-603165-m03:/home/docker/cp-test_multinode-603165-m02_multinode-603165-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m03 "sudo cat /home/docker/cp-test_multinode-603165-m02_multinode-603165-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp testdata/cp-test.txt multinode-603165-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1928602203/001/cp-test_multinode-603165-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165-m03:/home/docker/cp-test.txt multinode-603165:/home/docker/cp-test_multinode-603165-m03_multinode-603165.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165 "sudo cat /home/docker/cp-test_multinode-603165-m03_multinode-603165.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 cp multinode-603165-m03:/home/docker/cp-test.txt multinode-603165-m02:/home/docker/cp-test_multinode-603165-m03_multinode-603165-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 ssh -n multinode-603165-m02 "sudo cat /home/docker/cp-test_multinode-603165-m03_multinode-603165-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.34s)

TestMultiNode/serial/StopNode (3.98s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-603165 node stop m03: (3.084517136s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603165 status: exit status 7 (445.695096ms)

-- stdout --
	multinode-603165
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-603165-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-603165-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr: exit status 7 (452.154633ms)

-- stdout --
	multinode-603165
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-603165-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-603165-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 18:44:15.557946  148475 out.go:296] Setting OutFile to fd 1 ...
	I1002 18:44:15.558230  148475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:44:15.558248  148475 out.go:309] Setting ErrFile to fd 2...
	I1002 18:44:15.558257  148475 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:44:15.558530  148475 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	I1002 18:44:15.558758  148475 out.go:303] Setting JSON to false
	I1002 18:44:15.558796  148475 mustload.go:65] Loading cluster: multinode-603165
	I1002 18:44:15.558914  148475 notify.go:220] Checking for updates...
	I1002 18:44:15.559274  148475 config.go:182] Loaded profile config "multinode-603165": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 18:44:15.559294  148475 status.go:255] checking status of multinode-603165 ...
	I1002 18:44:15.559814  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.559867  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.579603  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37309
	I1002 18:44:15.580115  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.580613  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.580629  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.581030  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.581260  148475 main.go:141] libmachine: (multinode-603165) Calling .GetState
	I1002 18:44:15.583093  148475 status.go:330] multinode-603165 host status = "Running" (err=<nil>)
	I1002 18:44:15.583179  148475 host.go:66] Checking if "multinode-603165" exists ...
	I1002 18:44:15.583997  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.584037  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.598715  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37277
	I1002 18:44:15.599118  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.599578  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.599598  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.600050  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.600297  148475 main.go:141] libmachine: (multinode-603165) Calling .GetIP
	I1002 18:44:15.603083  148475 main.go:141] libmachine: (multinode-603165) DBG | domain multinode-603165 has defined MAC address 52:54:00:c6:34:c7 in network mk-multinode-603165
	I1002 18:44:15.603461  148475 main.go:141] libmachine: (multinode-603165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:34:c7", ip: ""} in network mk-multinode-603165: {Iface:virbr1 ExpiryTime:2023-10-02 19:41:15 +0000 UTC Type:0 Mac:52:54:00:c6:34:c7 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-603165 Clientid:01:52:54:00:c6:34:c7}
	I1002 18:44:15.603524  148475 main.go:141] libmachine: (multinode-603165) DBG | domain multinode-603165 has defined IP address 192.168.39.86 and MAC address 52:54:00:c6:34:c7 in network mk-multinode-603165
	I1002 18:44:15.603595  148475 host.go:66] Checking if "multinode-603165" exists ...
	I1002 18:44:15.603941  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.603982  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.618853  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44133
	I1002 18:44:15.619271  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.619750  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.619779  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.620132  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.620755  148475 main.go:141] libmachine: (multinode-603165) Calling .DriverName
	I1002 18:44:15.621123  148475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 18:44:15.621191  148475 main.go:141] libmachine: (multinode-603165) Calling .GetSSHHostname
	I1002 18:44:15.624539  148475 main.go:141] libmachine: (multinode-603165) DBG | domain multinode-603165 has defined MAC address 52:54:00:c6:34:c7 in network mk-multinode-603165
	I1002 18:44:15.624936  148475 main.go:141] libmachine: (multinode-603165) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:34:c7", ip: ""} in network mk-multinode-603165: {Iface:virbr1 ExpiryTime:2023-10-02 19:41:15 +0000 UTC Type:0 Mac:52:54:00:c6:34:c7 Iaid: IPaddr:192.168.39.86 Prefix:24 Hostname:multinode-603165 Clientid:01:52:54:00:c6:34:c7}
	I1002 18:44:15.624969  148475 main.go:141] libmachine: (multinode-603165) DBG | domain multinode-603165 has defined IP address 192.168.39.86 and MAC address 52:54:00:c6:34:c7 in network mk-multinode-603165
	I1002 18:44:15.625114  148475 main.go:141] libmachine: (multinode-603165) Calling .GetSSHPort
	I1002 18:44:15.625300  148475 main.go:141] libmachine: (multinode-603165) Calling .GetSSHKeyPath
	I1002 18:44:15.625487  148475 main.go:141] libmachine: (multinode-603165) Calling .GetSSHUsername
	I1002 18:44:15.625725  148475 sshutil.go:53] new ssh client: &{IP:192.168.39.86 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/multinode-603165/id_rsa Username:docker}
	I1002 18:44:15.721784  148475 ssh_runner.go:195] Run: systemctl --version
	I1002 18:44:15.727786  148475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 18:44:15.741495  148475 kubeconfig.go:92] found "multinode-603165" server: "https://192.168.39.86:8443"
	I1002 18:44:15.741525  148475 api_server.go:166] Checking apiserver status ...
	I1002 18:44:15.741561  148475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1002 18:44:15.753418  148475 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1897/cgroup
	I1002 18:44:15.764355  148475 api_server.go:182] apiserver freezer: "4:freezer:/kubepods/burstable/pod8555a29b874e5ae8abd28c46d3272286/32527e3f87a47ce51ced299f3945bb54c685262c9bbb4046fb8ef37e9ebe096c"
	I1002 18:44:15.764412  148475 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8555a29b874e5ae8abd28c46d3272286/32527e3f87a47ce51ced299f3945bb54c685262c9bbb4046fb8ef37e9ebe096c/freezer.state
	I1002 18:44:15.773216  148475 api_server.go:204] freezer state: "THAWED"
	I1002 18:44:15.773252  148475 api_server.go:253] Checking apiserver healthz at https://192.168.39.86:8443/healthz ...
	I1002 18:44:15.780001  148475 api_server.go:279] https://192.168.39.86:8443/healthz returned 200:
	ok
	I1002 18:44:15.780042  148475 status.go:421] multinode-603165 apiserver status = Running (err=<nil>)
	I1002 18:44:15.780055  148475 status.go:257] multinode-603165 status: &{Name:multinode-603165 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 18:44:15.780081  148475 status.go:255] checking status of multinode-603165-m02 ...
	I1002 18:44:15.780598  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.780667  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.797487  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41767
	I1002 18:44:15.797943  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.798447  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.798478  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.798844  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.799056  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .GetState
	I1002 18:44:15.800827  148475 status.go:330] multinode-603165-m02 host status = "Running" (err=<nil>)
	I1002 18:44:15.800847  148475 host.go:66] Checking if "multinode-603165-m02" exists ...
	I1002 18:44:15.801146  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.801206  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.816367  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41905
	I1002 18:44:15.816911  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.817424  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.817449  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.817853  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.818034  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .GetIP
	I1002 18:44:15.821006  148475 main.go:141] libmachine: (multinode-603165-m02) DBG | domain multinode-603165-m02 has defined MAC address 52:54:00:b2:28:d7 in network mk-multinode-603165
	I1002 18:44:15.821455  148475 main.go:141] libmachine: (multinode-603165-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:28:d7", ip: ""} in network mk-multinode-603165: {Iface:virbr1 ExpiryTime:2023-10-02 19:42:33 +0000 UTC Type:0 Mac:52:54:00:b2:28:d7 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:multinode-603165-m02 Clientid:01:52:54:00:b2:28:d7}
	I1002 18:44:15.821492  148475 main.go:141] libmachine: (multinode-603165-m02) DBG | domain multinode-603165-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:b2:28:d7 in network mk-multinode-603165
	I1002 18:44:15.821635  148475 host.go:66] Checking if "multinode-603165-m02" exists ...
	I1002 18:44:15.821921  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.821960  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.839165  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37137
	I1002 18:44:15.839625  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.840070  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.840097  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.840425  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.840582  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .DriverName
	I1002 18:44:15.840732  148475 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1002 18:44:15.840757  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .GetSSHHostname
	I1002 18:44:15.844079  148475 main.go:141] libmachine: (multinode-603165-m02) DBG | domain multinode-603165-m02 has defined MAC address 52:54:00:b2:28:d7 in network mk-multinode-603165
	I1002 18:44:15.844533  148475 main.go:141] libmachine: (multinode-603165-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b2:28:d7", ip: ""} in network mk-multinode-603165: {Iface:virbr1 ExpiryTime:2023-10-02 19:42:33 +0000 UTC Type:0 Mac:52:54:00:b2:28:d7 Iaid: IPaddr:192.168.39.149 Prefix:24 Hostname:multinode-603165-m02 Clientid:01:52:54:00:b2:28:d7}
	I1002 18:44:15.844574  148475 main.go:141] libmachine: (multinode-603165-m02) DBG | domain multinode-603165-m02 has defined IP address 192.168.39.149 and MAC address 52:54:00:b2:28:d7 in network mk-multinode-603165
	I1002 18:44:15.844706  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .GetSSHPort
	I1002 18:44:15.844922  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .GetSSHKeyPath
	I1002 18:44:15.845073  148475 main.go:141] libmachine: (multinode-603165-m02) Calling .GetSSHUsername
	I1002 18:44:15.845262  148475 sshutil.go:53] new ssh client: &{IP:192.168.39.149 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17339-126802/.minikube/machines/multinode-603165-m02/id_rsa Username:docker}
	I1002 18:44:15.936386  148475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1002 18:44:15.949134  148475 status.go:257] multinode-603165-m02 status: &{Name:multinode-603165-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1002 18:44:15.949170  148475 status.go:255] checking status of multinode-603165-m03 ...
	I1002 18:44:15.949547  148475 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:44:15.949602  148475 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:44:15.965567  148475 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44727
	I1002 18:44:15.966044  148475 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:44:15.966526  148475 main.go:141] libmachine: Using API Version  1
	I1002 18:44:15.966551  148475 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:44:15.966931  148475 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:44:15.967123  148475 main.go:141] libmachine: (multinode-603165-m03) Calling .GetState
	I1002 18:44:15.968723  148475 status.go:330] multinode-603165-m03 host status = "Stopped" (err=<nil>)
	I1002 18:44:15.968739  148475 status.go:343] host is not running, skipping remaining checks
	I1002 18:44:15.968744  148475 status.go:257] multinode-603165-m03 status: &{Name:multinode-603165-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.98s)

TestMultiNode/serial/StartAfterStop (31.12s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-603165 node start m03 --alsologtostderr: (30.452722329s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.12s)

TestMultiNode/serial/RestartKeepsNodes (180.04s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-603165
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-603165
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-603165: (28.465465936s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603165 --wait=true -v=8 --alsologtostderr
E1002 18:45:37.628336  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:46:05.314127  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 18:46:40.321992  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:47:15.201594  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603165 --wait=true -v=8 --alsologtostderr: (2m31.489484148s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-603165
--- PASS: TestMultiNode/serial/RestartKeepsNodes (180.04s)

TestMultiNode/serial/DeleteNode (1.76s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-603165 node delete m03: (1.194213135s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)

TestMultiNode/serial/StopMultiNode (25.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 stop
E1002 18:48:03.370124  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-603165 stop: (25.383990266s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603165 status: exit status 7 (83.130066ms)

-- stdout --
	multinode-603165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-603165-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr: exit status 7 (81.06873ms)

-- stdout --
	multinode-603165
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-603165-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1002 18:48:14.393927  149877 out.go:296] Setting OutFile to fd 1 ...
	I1002 18:48:14.394036  149877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:48:14.394044  149877 out.go:309] Setting ErrFile to fd 2...
	I1002 18:48:14.394048  149877 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1002 18:48:14.394207  149877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17339-126802/.minikube/bin
	I1002 18:48:14.394350  149877 out.go:303] Setting JSON to false
	I1002 18:48:14.394374  149877 mustload.go:65] Loading cluster: multinode-603165
	I1002 18:48:14.394489  149877 notify.go:220] Checking for updates...
	I1002 18:48:14.394724  149877 config.go:182] Loaded profile config "multinode-603165": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.2
	I1002 18:48:14.394739  149877 status.go:255] checking status of multinode-603165 ...
	I1002 18:48:14.395110  149877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:48:14.395175  149877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:48:14.414232  149877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40877
	I1002 18:48:14.414722  149877 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:48:14.415336  149877 main.go:141] libmachine: Using API Version  1
	I1002 18:48:14.415371  149877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:48:14.415797  149877 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:48:14.416040  149877 main.go:141] libmachine: (multinode-603165) Calling .GetState
	I1002 18:48:14.417644  149877 status.go:330] multinode-603165 host status = "Stopped" (err=<nil>)
	I1002 18:48:14.417663  149877 status.go:343] host is not running, skipping remaining checks
	I1002 18:48:14.417671  149877 status.go:257] multinode-603165 status: &{Name:multinode-603165 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1002 18:48:14.417700  149877 status.go:255] checking status of multinode-603165-m02 ...
	I1002 18:48:14.418163  149877 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1002 18:48:14.418209  149877 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1002 18:48:14.433331  149877 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37795
	I1002 18:48:14.433745  149877 main.go:141] libmachine: () Calling .GetVersion
	I1002 18:48:14.434205  149877 main.go:141] libmachine: Using API Version  1
	I1002 18:48:14.434235  149877 main.go:141] libmachine: () Calling .SetConfigRaw
	I1002 18:48:14.434550  149877 main.go:141] libmachine: () Calling .GetMachineName
	I1002 18:48:14.434708  149877 main.go:141] libmachine: (multinode-603165-m02) Calling .GetState
	I1002 18:48:14.436469  149877 status.go:330] multinode-603165-m02 host status = "Stopped" (err=<nil>)
	I1002 18:48:14.436484  149877 status.go:343] host is not running, skipping remaining checks
	I1002 18:48:14.436492  149877 status.go:257] multinode-603165-m02 status: &{Name:multinode-603165-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.55s)

TestMultiNode/serial/RestartMultiNode (118.7s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603165 --wait=true -v=8 --alsologtostderr --driver=kvm2 
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603165 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m58.144816667s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603165 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (118.70s)

TestMultiNode/serial/ValidateNameConflict (53.47s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-603165
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603165-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-603165-m02 --driver=kvm2 : exit status 14 (61.68724ms)

-- stdout --
	* [multinode-603165-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-603165-m02' is duplicated with machine name 'multinode-603165-m02' in profile 'multinode-603165'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603165-m03 --driver=kvm2 
E1002 18:50:37.628174  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603165-m03 --driver=kvm2 : (52.135612961s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-603165
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-603165: exit status 80 (220.873755ms)

-- stdout --
	* Adding node m03 to cluster multinode-603165
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-603165-m03 already exists in multinode-603165-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-603165-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-603165-m03: (1.008091637s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.47s)

TestPreload (178.35s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-724166 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1002 18:51:40.321983  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:52:15.202257  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-724166 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m25.19195733s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-724166 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-724166 image pull gcr.io/k8s-minikube/busybox: (2.032454674s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-724166
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-724166: (13.102845731s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-724166 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1002 18:53:38.247878  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-724166 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m16.771123707s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-724166 image list
helpers_test.go:175: Cleaning up "test-preload-724166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-724166
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-724166: (1.054692696s)
--- PASS: TestPreload (178.35s)

TestScheduledStopUnix (122.33s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-112130 --memory=2048 --driver=kvm2 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-112130 --memory=2048 --driver=kvm2 : (50.752729237s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112130 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-112130 -n scheduled-stop-112130
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112130 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112130 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-112130 -n scheduled-stop-112130
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-112130
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-112130 --schedule 15s
E1002 18:55:37.629482  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-112130
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-112130: exit status 7 (56.746397ms)

-- stdout --
	scheduled-stop-112130
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-112130 -n scheduled-stop-112130
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-112130 -n scheduled-stop-112130: exit status 7 (61.130327ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-112130" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-112130
--- PASS: TestScheduledStopUnix (122.33s)

TestSkaffold (141.83s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe173831628 version
skaffold_test.go:63: skaffold version: v2.7.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-532511 --memory=2600 --driver=kvm2 
E1002 18:56:40.322010  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 18:57:00.674723  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-532511 --memory=2600 --driver=kvm2 : (49.260578823s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe173831628 run --minikube-profile skaffold-532511 --kube-context skaffold-532511 --status-check=true --port-forward=false --interactive=false
E1002 18:57:15.202181  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe173831628 run --minikube-profile skaffold-532511 --kube-context skaffold-532511 --status-check=true --port-forward=false --interactive=false: (1m18.603042597s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-d94585656-97527" [d4969b64-4d8b-4527-8276-0f2ba2fd1835] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.020118794s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-69654765d9-fhpml" [aac1a326-77a9-4b03-9302-b2b17b6208a6] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009821497s
helpers_test.go:175: Cleaning up "skaffold-532511" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-532511
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-532511: (1.167521339s)
--- PASS: TestSkaffold (141.83s)

TestRunningBinaryUpgrade (238.71s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1770561329.exe start -p running-upgrade-285041 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1770561329.exe start -p running-upgrade-285041 --memory=2200 --vm-driver=kvm2 : (2m14.057700758s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-285041 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1002 19:03:29.853362  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:03:40.093782  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-285041 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m41.540940149s)
helpers_test.go:175: Cleaning up "running-upgrade-285041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-285041
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-285041: (1.496868318s)
--- PASS: TestRunningBinaryUpgrade (238.71s)

TestStoppedBinaryUpgrade/Setup (1.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.79s)

TestStoppedBinaryUpgrade/Upgrade (1146.86s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.1194506970.exe start -p stopped-upgrade-817564 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.1194506970.exe start -p stopped-upgrade-817564 --memory=2200 --vm-driver=kvm2 : (17m53.6492339s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.1194506970.exe -p stopped-upgrade-817564 stop
E1002 19:21:19.277931  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.1194506970.exe -p stopped-upgrade-817564 stop: (13.247334863s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-817564 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-817564 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (59.967821503s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (1146.86s)

TestPause/serial/Start (79.86s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-315274 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
E1002 19:04:00.574050  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-315274 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m19.861728143s)
--- PASS: TestPause/serial/Start (79.86s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-308183 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-308183 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (83.546396ms)

-- stdout --
	* [NoKubernetes-308183] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17339
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17339-126802/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17339-126802/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (63.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-308183 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-308183 --driver=kvm2 : (1m3.341328598s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-308183 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (63.74s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (52.13s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-315274 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-315274 --alsologtostderr -v=1 --driver=kvm2 : (52.105977599s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (52.13s)

TestNetworkPlugins/group/auto/Start (77.56s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1002 19:05:37.628244  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 19:06:03.455225  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m17.564826453s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.56s)

TestPause/serial/Pause (2.42s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-315274 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-315274 --alsologtostderr -v=5: (2.418194021s)
--- PASS: TestPause/serial/Pause (2.42s)

TestPause/serial/VerifyStatus (0.4s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-315274 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-315274 --output=json --layout=cluster: exit status 2 (400.208552ms)

-- stdout --
	{"Name":"pause-315274","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.31.2","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-315274","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.40s)

TestPause/serial/Unpause (0.78s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-315274 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.78s)

TestNoKubernetes/serial/StartWithStopK8s (13.54s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-308183 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-308183 --no-kubernetes --driver=kvm2 : (12.101004128s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-308183 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-308183 status -o json: exit status 2 (239.741755ms)

-- stdout --
	{"Name":"NoKubernetes-308183","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-308183
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-308183: (1.200646114s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.54s)

TestPause/serial/PauseAgain (1.07s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-315274 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-315274 --alsologtostderr -v=5: (1.067487959s)
--- PASS: TestPause/serial/PauseAgain (1.07s)

TestPause/serial/DeletePaused (1.14s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-315274 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-315274 --alsologtostderr -v=5: (1.136612541s)
--- PASS: TestPause/serial/DeletePaused (1.14s)

TestPause/serial/VerifyDeletedResources (0.52s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.52s)

TestNetworkPlugins/group/kindnet/Start (79.38s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m19.380959369s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.38s)

TestNoKubernetes/serial/Start (48.98s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-308183 --no-kubernetes --driver=kvm2 
E1002 19:06:31.216364  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.221694  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.232070  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.252404  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.292711  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.373039  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.533513  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:31.854447  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:32.495496  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:33.775928  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:36.336897  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:06:40.322339  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 19:06:41.457809  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-308183 --no-kubernetes --driver=kvm2 : (48.981425526s)
--- PASS: TestNoKubernetes/serial/Start (48.98s)

TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

TestNetworkPlugins/group/auto/NetCatPod (16.39s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l5shp" [093eaf3e-2300-4293-81f0-0595235a9a7c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 19:06:51.698187  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-l5shp" [093eaf3e-2300-4293-81f0-0595235a9a7c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 16.01255225s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (16.39s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-308183 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-308183 "sudo systemctl is-active --quiet service kubelet": exit status 1 (220.159493ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (1.14s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.14s)

TestNoKubernetes/serial/Stop (2.16s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-308183
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-308183: (2.157919893s)
--- PASS: TestNoKubernetes/serial/Stop (2.16s)

TestNoKubernetes/serial/StartNoArgs (26.62s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-308183 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-308183 --driver=kvm2 : (26.618459196s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (26.62s)

TestNetworkPlugins/group/calico/Start (124.85s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m4.84892658s)
--- PASS: TestNetworkPlugins/group/calico/Start (124.85s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qvwnh" [893386f7-00e2-447e-bc1d-c058efccfb94] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.022831884s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mmf2c" [90126feb-a7ed-4768-956b-1f4625384839] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mmf2c" [90126feb-a7ed-4768-956b-1f4625384839] Running
E1002 19:07:53.139829  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.010986639s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.37s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-308183 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-308183 "sudo systemctl is-active --quiet service kubelet": exit status 1 (232.901367ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

TestNetworkPlugins/group/custom-flannel/Start (104.91s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m44.912047199s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (104.91s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/false/Start (97.92s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E1002 19:08:19.612706  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:08:47.296337  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:09:15.061064  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m37.923738389s)
--- PASS: TestNetworkPlugins/group/false/Start (97.92s)

TestNetworkPlugins/group/calico/ControllerPod (5.05s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9sx2x" [ad4839a0-6ac5-4898-a165-082cb371590d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.051089711s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.05s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6d7vp" [d552735f-e233-40ae-aaa9-8b882a8daa4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6d7vp" [d552735f-e233-40ae-aaa9-8b882a8daa4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.017340074s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.42s)

TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

TestNetworkPlugins/group/calico/NetCatPod (12.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6kxp2" [f7e0d983-f8d8-4c5c-8096-306ae8c933b3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6kxp2" [f7e0d983-f8d8-4c5c-8096-306ae8c933b3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.012396949s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.42s)

TestNetworkPlugins/group/calico/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.33s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/false/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.22s)

TestNetworkPlugins/group/false/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nntwb" [d5cbfff3-94bf-42a6-a897-0c5f19f4944d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nntwb" [d5cbfff3-94bf-42a6-a897-0c5f19f4944d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.011941569s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.36s)

TestNetworkPlugins/group/false/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (74.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m14.896295628s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.90s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (109.67s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m49.669829109s)
--- PASS: TestNetworkPlugins/group/flannel/Start (109.67s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (113.25s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1002 19:10:18.248803  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 19:10:37.629083  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m53.253805078s)
--- PASS: TestNetworkPlugins/group/bridge/Start (113.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-cz74m" [4316637d-0392-4df3-8ef5-8b2bdc6ceb1d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-cz74m" [4316637d-0392-4df3-8ef5-8b2bdc6ceb1d] Running
E1002 19:11:31.216353  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.013299181s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (73.21s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
E1002 19:11:49.357366  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.363583  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.374714  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.395583  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.435926  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.516973  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.678033  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:49.998331  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:50.638535  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:51.918958  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-520845 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m13.214035443s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (73.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-9t999" [5474ce2e-0bc5-428a-8691-f5dd24c810e5] Running
E1002 19:11:54.479491  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:11:58.902057  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019967518s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-520845 replace --force -f testdata/netcat-deployment.yaml
E1002 19:11:59.599749  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-wdpc8" [26cc9577-326f-42be-8faf-5e329862862f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-wdpc8" [26cc9577-326f-42be-8faf-5e329862862f] Running
E1002 19:12:09.840117  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.013010008s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.4s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bk67g" [b5966695-1c20-4f9c-ab42-33be55d38cf6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-bk67g" [b5966695-1c20-4f9c-ab42-33be55d38cf6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.02389236s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (136.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-695840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1002 19:12:35.645080  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:35.650301  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:35.660642  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:35.680843  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:35.720969  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:35.801306  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:35.961881  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:36.283062  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:36.924274  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:38.205300  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:40.766280  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-695840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m16.503881951s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.50s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (104.2s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-680492 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:12:45.886650  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:12:56.127606  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-680492 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (1m44.198323363s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (104.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-520845 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (15.48s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-520845 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xwqth" [65272701-9912-44a0-9109-3d552166121a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1002 19:13:11.281900  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-xwqth" [65272701-9912-44a0-9109-3d552166121a] Running
E1002 19:13:16.608695  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 15.014202415s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (15.48s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-520845 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-520845 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)
E1002 19:22:35.644886  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (78.73s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-153772 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:13:40.675543  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 19:13:57.569121  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:14:27.167602  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:27.172948  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:27.183282  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:27.203654  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:27.244029  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:27.324523  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-153772 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (1m18.726934239s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (78.73s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (10.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-680492 create -f testdata/busybox.yaml
E1002 19:14:27.484927  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [19b62605-7910-4464-bfc1-96fbd6140301] Pending
helpers_test.go:344: "busybox" [19b62605-7910-4464-bfc1-96fbd6140301] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 19:14:27.806063  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:28.446595  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:29.727390  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:31.935105  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:31.940432  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:31.950818  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:31.971156  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:32.011517  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:32.092180  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:32.252784  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:32.288005  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:14:32.573162  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
helpers_test.go:344: "busybox" [19b62605-7910-4464-bfc1-96fbd6140301] Running
E1002 19:14:33.202859  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:14:33.214077  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:34.495016  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:37.055547  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:14:37.409086  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.03145472s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-680492 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.49s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-680492 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.136136058s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-680492 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (13.13s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-680492 --alsologtostderr -v=3
E1002 19:14:42.175862  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-680492 --alsologtostderr -v=3: (13.129591779s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.13s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-695840 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E1002 19:14:47.650156  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
helpers_test.go:344: "busybox" [72e1afbd-359d-4274-8b38-7b39f66550c6] Pending
helpers_test.go:344: "busybox" [72e1afbd-359d-4274-8b38-7b39f66550c6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 19:14:49.053702  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.059053  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.069394  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.089773  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.130154  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.210751  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.371470  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:49.691740  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:14:50.332852  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
helpers_test.go:344: "busybox" [72e1afbd-359d-4274-8b38-7b39f66550c6] Running
E1002 19:14:51.613545  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.042978489s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-695840 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680492 -n no-preload-680492
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680492 -n no-preload-680492: exit status 7 (69.550627ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-680492 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1002 19:14:52.416805  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (306.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-680492 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:14:54.173783  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-680492 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.2: (5m6.229797649s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-680492 -n no-preload-680492
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (306.52s)

TestStartStop/group/embed-certs/serial/DeployApp (9.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-153772 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3c42b22d-c466-44e3-a4b1-77280219c78f] Pending
helpers_test.go:344: "busybox" [3c42b22d-c466-44e3-a4b1-77280219c78f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3c42b22d-c466-44e3-a4b1-77280219c78f] Running
E1002 19:14:59.294167  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.031704951s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-153772 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.50s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-695840 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-695840 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-695840 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-695840 --alsologtostderr -v=3: (13.135319935s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.14s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153772 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-153772 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.085169001s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-153772 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/embed-certs/serial/Stop (13.13s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-153772 --alsologtostderr -v=3
E1002 19:15:08.130359  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:15:09.535031  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-153772 --alsologtostderr -v=3: (13.125059024s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.13s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695840 -n old-k8s-version-695840
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695840 -n old-k8s-version-695840: exit status 7 (61.757099ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-695840 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/old-k8s-version/serial/SecondStart (440.29s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-695840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1002 19:15:12.897072  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-695840 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m19.961707201s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-695840 -n old-k8s-version-695840
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (440.29s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153772 -n embed-certs-153772
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153772 -n embed-certs-153772: exit status 7 (69.508013ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-153772 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (365.04s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-153772 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:15:19.490232  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:15:30.016304  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:15:37.629020  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 19:15:49.091072  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:15:53.857749  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:16:10.977405  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:16:19.277995  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.283340  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.293639  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.313985  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.354368  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.434783  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.595159  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:19.915913  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:20.556936  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:21.838013  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:24.398352  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:29.519577  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:31.216063  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
E1002 19:16:39.760739  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:16:40.322603  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
E1002 19:16:49.358049  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:16:54.115774  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.121091  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.131395  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.151783  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.192140  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.272489  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.432921  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:54.753147  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:55.394049  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:56.674719  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:16:59.235940  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:17:00.241362  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:17:04.356350  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:17:11.012247  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:17:11.924614  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:11.929938  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:11.940427  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:11.960977  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:12.001343  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:12.081713  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:12.242181  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:12.562849  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:13.203770  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:14.483949  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:14.597289  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:17:15.200883  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 19:17:15.778634  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:17:17.043826  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
E1002 19:17:17.044870  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:22.165445  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:32.406204  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:17:32.898065  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:17:35.078008  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:17:35.644260  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:17:41.201805  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:17:52.887155  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:18:03.243120  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.248440  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.258880  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.279243  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.319653  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.330933  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kindnet-520845/client.crt: no such file or directory
E1002 19:18:03.400732  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.561228  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:03.881958  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:04.522120  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:05.803226  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:08.364275  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:13.485309  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:16.038374  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:18:19.613227  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:18:23.725900  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:18:33.847967  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:18:44.206509  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:19:03.123048  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
E1002 19:19:25.167596  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
E1002 19:19:27.167439  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:19:31.935118  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:19:37.958738  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
E1002 19:19:42.656957  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/skaffold-532511/client.crt: no such file or directory
E1002 19:19:49.054454  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:19:54.853441  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:19:55.768756  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-153772 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.2: (6m4.616796379s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-153772 -n embed-certs-153772
E1002 19:21:23.371599  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (365.04s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2c7jq" [25be1ab0-9d03-4b5b-9d17-14bdc76f588d] Running
E1002 19:19:59.619421  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019268876s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-2c7jq" [25be1ab0-9d03-4b5b-9d17-14bdc76f588d] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010681258s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-680492 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-680492 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/Pause (2.65s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-680492 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680492 -n no-preload-680492
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680492 -n no-preload-680492: exit status 2 (257.773289ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-680492 -n no-preload-680492
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-680492 -n no-preload-680492: exit status 2 (266.280554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-680492 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-680492 -n no-preload-680492
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-680492 -n no-preload-680492
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.65s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.68s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-075364 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:20:16.738613  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/false-520845/client.crt: no such file or directory
E1002 19:20:37.628884  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/ingress-addon-legacy-056933/client.crt: no such file or directory
E1002 19:20:47.088015  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-075364 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (1m16.684413235s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (76.68s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (24.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kq6r2" [2b54a4aa-bff8-440a-b537-1cdf568a18f8] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kq6r2" [2b54a4aa-bff8-440a-b537-1cdf568a18f8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 24.022993391s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (24.03s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.53s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-075364 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d3233897-fc93-4924-80a2-50dead02137d] Pending
helpers_test.go:344: "busybox" [d3233897-fc93-4924-80a2-50dead02137d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1002 19:21:31.216393  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/gvisor-566865/client.crt: no such file or directory
helpers_test.go:344: "busybox" [d3233897-fc93-4924-80a2-50dead02137d] Running
E1002 19:21:40.322759  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/addons-376551/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 11.028371296s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-075364 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (11.53s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-075364 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-075364 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.155308703s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-075364 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-075364 --alsologtostderr -v=3
E1002 19:21:46.964174  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/enable-default-cni-520845/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-075364 --alsologtostderr -v=3: (13.143505431s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.14s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-kq6r2" [2b54a4aa-bff8-440a-b537-1cdf568a18f8] Running
E1002 19:21:49.357915  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012515837s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-153772 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-153772 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/embed-certs/serial/Pause (2.67s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-153772 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153772 -n embed-certs-153772
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153772 -n embed-certs-153772: exit status 2 (267.427102ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-153772 -n embed-certs-153772
E1002 19:21:54.115213  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-153772 -n embed-certs-153772: exit status 2 (266.845141ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-153772 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-153772 -n embed-certs-153772
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-153772 -n embed-certs-153772
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.67s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.73s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364: exit status 7 (70.822356ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-075364 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.73s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (349.8s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-075364 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-075364 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.2: (5m49.468539584s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (349.80s)

TestStartStop/group/newest-cni/serial/FirstStart (107.81s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-962509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:22:11.923816  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
E1002 19:22:15.200823  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/functional-720299/client.crt: no such file or directory
E1002 19:22:21.799533  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/flannel-520845/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-962509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (1m47.807853619s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (107.81s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7j8g9" [21bbe5be-45d4-47e5-a886-f9b8c6e63ebf] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019256289s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-817564
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-817564: (1.411085791s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7j8g9" [21bbe5be-45d4-47e5-a886-f9b8c6e63ebf] Running
E1002 19:22:39.609665  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/bridge-520845/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01243489s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-695840 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/Pause (2.65s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-695840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695840 -n old-k8s-version-695840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695840 -n old-k8s-version-695840: exit status 2 (256.004221ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-695840 -n old-k8s-version-695840
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-695840 -n old-k8s-version-695840: exit status 2 (268.066699ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-695840 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-695840 -n old-k8s-version-695840
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-695840 -n old-k8s-version-695840
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-962509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-962509 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.032746373s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/newest-cni/serial/Stop (13.11s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-962509 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-962509 --alsologtostderr -v=3: (13.114378988s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-962509 -n newest-cni-962509
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-962509 -n newest-cni-962509: exit status 7 (63.908453ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-962509 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (45.27s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-962509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2
E1002 19:24:27.167426  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/calico-520845/client.crt: no such file or directory
E1002 19:24:27.705470  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:27.710796  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:27.721178  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:27.741577  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:27.782003  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:27.862434  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:28.023396  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:28.343757  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:28.983990  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:30.264398  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:31.934791  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/custom-flannel-520845/client.crt: no such file or directory
E1002 19:24:32.825401  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
E1002 19:24:37.945742  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/no-preload-680492/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-962509 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.2: (45.019522317s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-962509 -n newest-cni-962509
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.27s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-962509 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.29s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-962509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-962509 -n newest-cni-962509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-962509 -n newest-cni-962509: exit status 2 (246.112881ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-962509 -n newest-cni-962509
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-962509 -n newest-cni-962509: exit status 2 (254.650778ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-962509 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-962509 -n newest-cni-962509
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-962509 -n newest-cni-962509
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.29s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.02s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sp279" [85437108-4d2f-4a8f-9fc7-78c365cb8b9e] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sp279" [85437108-4d2f-4a8f-9fc7-78c365cb8b9e] Running
E1002 19:28:03.243358  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/kubenet-520845/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 21.019261808s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (21.02s)
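The check above waits up to 9m0s for pods carrying the `k8s-app=kubernetes-dashboard` label to reach a Running, ready state. A minimal sketch of that readiness predicate over pod records (field names mirror the `kubectl get pods -o json` shape; the sample records are illustrative, not taken from this run):

```python
# Sketch of the "healthy within" check: a pod counts as ready once its
# phase is Running and every container reports ready=true.
# The sample pod records below are illustrative.

def pod_ready(pod: dict) -> bool:
    if pod["status"]["phase"] != "Running":
        return False
    statuses = pod["status"].get("containerStatuses", [])
    return bool(statuses) and all(cs["ready"] for cs in statuses)

def matching_ready(pods: list, label_key: str, label_value: str) -> list:
    """Names of ready pods carrying the given label."""
    return [
        p["metadata"]["name"]
        for p in pods
        if p["metadata"].get("labels", {}).get(label_key) == label_value
        and pod_ready(p)
    ]

pods = [
    {
        "metadata": {
            "name": "kubernetes-dashboard-8694d4445c-sp279",
            "labels": {"k8s-app": "kubernetes-dashboard"},
        },
        "status": {"phase": "Running", "containerStatuses": [{"ready": True}]},
    },
    {
        "metadata": {
            "name": "coredns-sample",
            "labels": {"k8s-app": "kube-dns"},
        },
        "status": {"phase": "Running", "containerStatuses": [{"ready": True}]},
    },
]

print(matching_ready(pods, "k8s-app", "kubernetes-dashboard"))
```

The real test polls this condition repeatedly until the timeout; a pod in Pending with `ContainersNotReady` (as seen earlier in the log) simply fails the predicate until its containers come up.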

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-sp279" [85437108-4d2f-4a8f-9fc7-78c365cb8b9e] Running
E1002 19:28:12.404055  134025 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17339-126802/.minikube/profiles/auto-520845/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.013279019s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-075364 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-075364 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)
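The VerifyKubernetesImages checks parse the JSON emitted by `sudo crictl images -o json` and flag repo tags outside the expected image set, producing the "Found non-minikube image" lines above. A rough sketch of that filtering (the `EXPECTED_PREFIXES` allowlist and sample payload here are illustrative; the real test derives its expected set from the Kubernetes version under test):

```python
import json

# Illustrative allowlist, not the test's actual expected-image list.
EXPECTED_PREFIXES = (
    "registry.k8s.io/",
    "gcr.io/k8s-minikube/storage-provisioner",
)

# Sample payload in the shape `crictl images -o json` emits.
CRICTL_JSON = """
{
  "images": [
    {"repoTags": ["registry.k8s.io/pause:3.9"]},
    {"repoTags": ["gcr.io/k8s-minikube/gvisor-addon:2"]},
    {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}
  ]
}
"""

def non_minikube_images(raw: str) -> list:
    """Return repo tags not covered by any expected prefix."""
    data = json.loads(raw)
    found = []
    for image in data.get("images", []):
        for tag in image.get("repoTags", []):
            # str.startswith accepts a tuple of prefixes.
            if not tag.startswith(EXPECTED_PREFIXES):
                found.append(tag)
    return found

for tag in non_minikube_images(CRICTL_JSON):
    print(f"Found non-minikube image: {tag}")
```

With this sample data the two `gcr.io/k8s-minikube` addon images are reported, matching the log lines above, while the core `registry.k8s.io` image passes silently.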

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-075364 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364: exit status 2 (249.843043ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364: exit status 2 (257.766055ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-075364 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-075364 -n default-k8s-diff-port-075364
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.48s)
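The Pause checks tolerate `exit status 2` ("may be ok") because a paused profile legitimately reports `Paused` for the API server and `Stopped` for the kubelet. A toy predicate capturing the state pair the test accepts (the state names come from the status output above; the function itself is illustrative):

```python
# Illustrative: a profile counts as paused when the API server reports
# Paused and the kubelet reports Stopped, per the status output above.
def is_paused(apiserver: str, kubelet: str) -> bool:
    return apiserver == "Paused" and kubelet == "Stopped"

print(is_paused("Paused", "Stopped"))   # state observed after `minikube pause`
print(is_paused("Running", "Running"))  # state expected after `minikube unpause`
```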

Test skip (30/313)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.2/cached-images (0.00s)

TestDownloadOnly/v1.28.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.2/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.2/binaries (0.00s)

TestDownloadOnly/v1.28.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.2/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:476: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (3.24s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-520845 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-520845

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-520845" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: ip a s:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: ip r s:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: iptables-save:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: iptables table nat:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-520845

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-520845

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-520845" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-520845" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-520845

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-520845

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-520845" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-520845" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-520845" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-520845" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-520845" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: kubelet daemon config:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> k8s: kubelet logs:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-520845

>>> host: docker daemon status:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: docker daemon config:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: docker system info:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: cri-docker daemon status:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: cri-docker daemon config:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: cri-dockerd version:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: containerd daemon status:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: containerd daemon config:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: containerd config dump:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: crio daemon status:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: crio daemon config:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: /etc/crio:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

>>> host: crio config:
* Profile "cilium-520845" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-520845"

----------------------- debugLogs end: cilium-520845 [took: 3.10689421s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-520845" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-520845
--- SKIP: TestNetworkPlugins/group/cilium (3.24s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-475939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-475939
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
