Test Report: KVM_Linux 17634

6a47c51e356b14dff44e127278d7e2190d030982:2023-11-17:31915

Test fail (7/321)

TestErrorSpam/setup (24.7s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-725106 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-725106 --driver=kvm2 
error_spam_test.go:81: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p nospam-725106 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-725106 --driver=kvm2 : exit status 90 (24.695794674s)

-- stdout --
	* [nospam-725106] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting control plane node nospam-725106 in cluster nospam-725106
	* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:83: "out/minikube-linux-amd64 start -p nospam-725106 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-725106 --driver=kvm2 " failed: exit status 90
error_spam_test.go:96: unexpected stderr: "X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1"
error_spam_test.go:96: unexpected stderr: "stdout:"
error_spam_test.go:96: unexpected stderr: "stderr:"
error_spam_test.go:96: unexpected stderr: "Job failed. See \"journalctl -xe\" for details."
error_spam_test.go:96: unexpected stderr: "* "
error_spam_test.go:96: unexpected stderr: "╭─────────────────────────────────────────────────────────────────────────────────────────────╮"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * If the above advice does not help, please let us know:                                 │"
error_spam_test.go:96: unexpected stderr: "│      https://github.com/kubernetes/minikube/issues/new/choose                               │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │"
error_spam_test.go:96: unexpected stderr: "│                                                                                             │"
error_spam_test.go:96: unexpected stderr: "╰─────────────────────────────────────────────────────────────────────────────────────────────╯"
error_spam_test.go:110: minikube stdout:
* [nospam-725106] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
- MINIKUBE_LOCATION=17634
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
- MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
- MINIKUBE_BIN=out/minikube-linux-amd64
- MINIKUBE_FORCE_SYSTEMD=
* Using the kvm2 driver based on user configuration
* Starting control plane node nospam-725106 in cluster nospam-725106
* Creating kvm2 VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...

error_spam_test.go:111: minikube stderr:
X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
stdout:

stderr:
Job failed. See "journalctl -xe" for details.

* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
error_spam_test.go:121: missing kubeadm init sub-step "Generating certificates and keys ..."
error_spam_test.go:121: missing kubeadm init sub-step "Booting up control plane ..."
error_spam_test.go:121: missing kubeadm init sub-step "Configuring RBAC rules ..."
--- FAIL: TestErrorSpam/setup (24.70s)
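
Triage note: the setup failure is RUNTIME_ENABLE, i.e. `sudo systemctl restart cri-docker.socket` exited with status 1 inside the guest, so `minikube start` aborted during runtime enablement, before kubeadm init could emit the "Generating certificates and keys", "Booting up control plane" and "Configuring RBAC rules" sub-steps the test checks for. A minimal follow-up sketch along the lines the error output itself suggests, assuming the nospam-725106 VM is still running and reachable over SSH (the profile name and the cri-docker unit are taken from the log above; that the units are still in a failed state at triage time is an assumption, and nothing in this report shows these commands were run):

	# Collect the log bundle the boxed advice asks for.
	out/minikube-linux-amd64 -p nospam-725106 logs --file=logs.txt

	# Follow the "journalctl -xe" hint: inspect the failed socket and its service from inside the guest.
	out/minikube-linux-amd64 -p nospam-725106 ssh -- sudo systemctl status cri-docker.socket cri-docker.service
	out/minikube-linux-amd64 -p nospam-725106 ssh -- sudo journalctl -xe -u cri-docker.socket -u cri-docker.service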

TestStartStop/group/no-preload/serial/SecondStart (38.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-614434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p no-preload-614434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: exit status 90 (38.340290248s)

-- stdout --
	* [no-preload-614434] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node no-preload-614434 in cluster no-preload-614434
	* Restarting existing kvm2 VM for "no-preload-614434" ...
	
	

-- /stdout --
** stderr ** 
	I1117 16:46:56.623250   47390 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:46:56.623427   47390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:46:56.623464   47390 out.go:309] Setting ErrFile to fd 2...
	I1117 16:46:56.623482   47390 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:46:56.623701   47390 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:46:56.624331   47390 out.go:303] Setting JSON to false
	I1117 16:46:56.625322   47390 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5363,"bootTime":1700234254,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:46:56.625406   47390 start.go:138] virtualization: kvm guest
	I1117 16:46:56.627691   47390 out.go:177] * [no-preload-614434] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 16:46:56.629350   47390 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:46:56.630709   47390 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:46:56.629316   47390 notify.go:220] Checking for updates...
	I1117 16:46:56.633690   47390 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 16:46:56.635059   47390 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 16:46:56.636367   47390 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:46:56.637699   47390 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:46:56.639537   47390 config.go:182] Loaded profile config "no-preload-614434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:46:56.640224   47390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:46:56.640274   47390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:46:56.660989   47390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37737
	I1117 16:46:56.661464   47390 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:46:56.662122   47390 main.go:141] libmachine: Using API Version  1
	I1117 16:46:56.662141   47390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:46:56.663995   47390 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:46:56.664311   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:46:56.664622   47390 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:46:56.665039   47390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:46:56.665145   47390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:46:56.684696   47390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46771
	I1117 16:46:56.685340   47390 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:46:56.685946   47390 main.go:141] libmachine: Using API Version  1
	I1117 16:46:56.685964   47390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:46:56.686551   47390 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:46:56.686766   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:46:56.737562   47390 out.go:177] * Using the kvm2 driver based on existing profile
	I1117 16:46:56.739055   47390 start.go:298] selected driver: kvm2
	I1117 16:46:56.739073   47390 start.go:902] validating driver "kvm2" against &{Name:no-preload-614434 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-614434 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:46:56.739223   47390 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:46:56.740210   47390 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.740299   47390 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17634-9353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 16:46:56.761528   47390 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1117 16:46:56.762045   47390 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 16:46:56.762150   47390 cni.go:84] Creating CNI manager for ""
	I1117 16:46:56.762168   47390 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1117 16:46:56.762200   47390 start_flags.go:323] config:
	{Name:no-preload-614434 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:no-preload-614434 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.191 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:46:56.762408   47390 iso.go:125] acquiring lock: {Name:mkfd0387d5051e05351c5f239ccf79a882c64dcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.764316   47390 out.go:177] * Starting control plane node no-preload-614434 in cluster no-preload-614434
	I1117 16:46:56.765883   47390 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1117 16:46:56.766048   47390 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/config.json ...
	I1117 16:46:56.766381   47390 cache.go:107] acquiring lock: {Name:mk6c1b2218f3aca7d0a17d854f6c278ab8715449 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766465   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1117 16:46:56.766475   47390 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 101.537µs
	I1117 16:46:56.766491   47390 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1117 16:46:56.766505   47390 cache.go:107] acquiring lock: {Name:mkdc56155aa58a16c7b2cebc435868d8a766c729 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766556   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 exists
	I1117 16:46:56.766562   47390 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.28.3" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3" took 59.232µs
	I1117 16:46:56.766571   47390 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.28.3 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.3 succeeded
	I1117 16:46:56.766589   47390 cache.go:107] acquiring lock: {Name:mkf48985573d1b72c5e93e1c7a57852552b23df8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766618   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 exists
	I1117 16:46:56.766624   47390 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.28.3" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3" took 42.709µs
	I1117 16:46:56.766633   47390 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.28.3 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.3 succeeded
	I1117 16:46:56.766648   47390 cache.go:107] acquiring lock: {Name:mkb6d104d56689fedae2cec771c2f8275a4fc0f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766688   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 exists
	I1117 16:46:56.766699   47390 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.28.3" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3" took 51.509µs
	I1117 16:46:56.766706   47390 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.28.3 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.3 succeeded
	I1117 16:46:56.766723   47390 cache.go:107] acquiring lock: {Name:mk4d668b72886814f3570e54e5eeed81614deb5c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766756   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 exists
	I1117 16:46:56.766762   47390 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.28.3" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3" took 46.834µs
	I1117 16:46:56.766770   47390 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.28.3 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.3 succeeded
	I1117 16:46:56.766781   47390 cache.go:107] acquiring lock: {Name:mk47137bd6f9a7eacf438393b2f340768e07200d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766810   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I1117 16:46:56.766818   47390 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 38.211µs
	I1117 16:46:56.766825   47390 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I1117 16:46:56.766836   47390 cache.go:107] acquiring lock: {Name:mk914192926ea71dfe234ff92afcfdfc16b84121 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766877   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 exists
	I1117 16:46:56.766884   47390 cache.go:96] cache image "registry.k8s.io/etcd:3.5.9-0" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0" took 49.23µs
	I1117 16:46:56.766892   47390 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.9-0 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 succeeded
	I1117 16:46:56.766903   47390 cache.go:107] acquiring lock: {Name:mk654c5c4eee07328e45369cf80c8539dae0fbdc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:46:56.766933   47390 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 exists
	I1117 16:46:56.766939   47390 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.10.1" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1" took 37.598µs
	I1117 16:46:56.766947   47390 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.10.1 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 succeeded
	I1117 16:46:56.766953   47390 cache.go:87] Successfully saved all images to host disk.
	I1117 16:46:56.767289   47390 start.go:365] acquiring machines lock for no-preload-614434: {Name:mk6ad0795a1bc343dcb7c179b8c56e6ba763a05d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1117 16:47:11.422565   47390 start.go:369] acquired machines lock for "no-preload-614434" in 14.655239577s
	I1117 16:47:11.422630   47390 start.go:96] Skipping create...Using existing machine configuration
	I1117 16:47:11.422641   47390 fix.go:54] fixHost starting: 
	I1117 16:47:11.423074   47390 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:47:11.423109   47390 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:47:11.439641   47390 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34123
	I1117 16:47:11.440126   47390 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:47:11.440672   47390 main.go:141] libmachine: Using API Version  1
	I1117 16:47:11.440696   47390 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:47:11.441030   47390 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:47:11.441231   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:11.441362   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetState
	I1117 16:47:11.443041   47390 fix.go:102] recreateIfNeeded on no-preload-614434: state=Stopped err=<nil>
	I1117 16:47:11.443087   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	W1117 16:47:11.443265   47390 fix.go:128] unexpected machine state, will restart: <nil>
	I1117 16:47:11.445262   47390 out.go:177] * Restarting existing kvm2 VM for "no-preload-614434" ...
	I1117 16:47:11.446650   47390 main.go:141] libmachine: (no-preload-614434) Calling .Start
	I1117 16:47:11.446848   47390 main.go:141] libmachine: (no-preload-614434) Ensuring networks are active...
	I1117 16:47:11.447632   47390 main.go:141] libmachine: (no-preload-614434) Ensuring network default is active
	I1117 16:47:11.447988   47390 main.go:141] libmachine: (no-preload-614434) Ensuring network mk-no-preload-614434 is active
	I1117 16:47:11.448399   47390 main.go:141] libmachine: (no-preload-614434) Getting domain xml...
	I1117 16:47:11.449382   47390 main.go:141] libmachine: (no-preload-614434) Creating domain...
	I1117 16:47:12.783687   47390 main.go:141] libmachine: (no-preload-614434) Waiting to get IP...
	I1117 16:47:12.784660   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:12.785108   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:12.785146   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:12.785047   47535 retry.go:31] will retry after 203.513795ms: waiting for machine to come up
	I1117 16:47:12.990499   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:12.991029   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:12.991061   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:12.990966   47535 retry.go:31] will retry after 279.823867ms: waiting for machine to come up
	I1117 16:47:13.272599   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:13.273253   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:13.273274   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:13.273206   47535 retry.go:31] will retry after 321.804752ms: waiting for machine to come up
	I1117 16:47:13.596809   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:13.597476   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:13.597511   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:13.597425   47535 retry.go:31] will retry after 607.425874ms: waiting for machine to come up
	I1117 16:47:14.206151   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:14.206678   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:14.206711   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:14.206618   47535 retry.go:31] will retry after 475.290136ms: waiting for machine to come up
	I1117 16:47:14.683490   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:14.684162   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:14.684194   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:14.684132   47535 retry.go:31] will retry after 894.091816ms: waiting for machine to come up
	I1117 16:47:15.579874   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:15.580392   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:15.580419   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:15.580349   47535 retry.go:31] will retry after 729.261123ms: waiting for machine to come up
	I1117 16:47:16.311323   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:16.311830   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:16.311859   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:16.311798   47535 retry.go:31] will retry after 979.242324ms: waiting for machine to come up
	I1117 16:47:17.292492   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:17.293017   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:17.293040   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:17.292964   47535 retry.go:31] will retry after 1.776026533s: waiting for machine to come up
	I1117 16:47:19.071428   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:19.071896   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:19.071927   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:19.071847   47535 retry.go:31] will retry after 2.25437324s: waiting for machine to come up
	I1117 16:47:21.327610   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:21.328257   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:21.328290   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:21.328218   47535 retry.go:31] will retry after 1.957959312s: waiting for machine to come up
	I1117 16:47:23.288247   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:23.288839   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:23.288870   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:23.288778   47535 retry.go:31] will retry after 2.343835032s: waiting for machine to come up
	I1117 16:47:25.635374   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:25.635955   47390 main.go:141] libmachine: (no-preload-614434) DBG | unable to find current IP address of domain no-preload-614434 in network mk-no-preload-614434
	I1117 16:47:25.635980   47390 main.go:141] libmachine: (no-preload-614434) DBG | I1117 16:47:25.635914   47535 retry.go:31] will retry after 3.491502922s: waiting for machine to come up
	I1117 16:47:29.128952   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.129589   47390 main.go:141] libmachine: (no-preload-614434) Found IP for machine: 192.168.61.191
	I1117 16:47:29.129614   47390 main.go:141] libmachine: (no-preload-614434) Reserving static IP address...
	I1117 16:47:29.129632   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has current primary IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.130261   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "no-preload-614434", mac: "52:54:00:7b:12:9e", ip: "192.168.61.191"} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.130295   47390 main.go:141] libmachine: (no-preload-614434) DBG | skip adding static IP to network mk-no-preload-614434 - found existing host DHCP lease matching {name: "no-preload-614434", mac: "52:54:00:7b:12:9e", ip: "192.168.61.191"}
	I1117 16:47:29.130310   47390 main.go:141] libmachine: (no-preload-614434) Reserved static IP address: 192.168.61.191
	I1117 16:47:29.130325   47390 main.go:141] libmachine: (no-preload-614434) Waiting for SSH to be available...
	I1117 16:47:29.130342   47390 main.go:141] libmachine: (no-preload-614434) DBG | Getting to WaitForSSH function...
	I1117 16:47:29.132684   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.133073   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.133107   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.133229   47390 main.go:141] libmachine: (no-preload-614434) DBG | Using SSH client type: external
	I1117 16:47:29.133260   47390 main.go:141] libmachine: (no-preload-614434) DBG | Using SSH private key: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa (-rw-------)
	I1117 16:47:29.133293   47390 main.go:141] libmachine: (no-preload-614434) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.191 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1117 16:47:29.133308   47390 main.go:141] libmachine: (no-preload-614434) DBG | About to run SSH command:
	I1117 16:47:29.133322   47390 main.go:141] libmachine: (no-preload-614434) DBG | exit 0
	I1117 16:47:29.230448   47390 main.go:141] libmachine: (no-preload-614434) DBG | SSH cmd err, output: <nil>: 
	I1117 16:47:29.230860   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetConfigRaw
	I1117 16:47:29.231623   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetIP
	I1117 16:47:29.233800   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.234312   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.234337   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.234629   47390 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/config.json ...
	I1117 16:47:29.234891   47390 machine.go:88] provisioning docker machine ...
	I1117 16:47:29.234915   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:29.235121   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetMachineName
	I1117 16:47:29.235300   47390 buildroot.go:166] provisioning hostname "no-preload-614434"
	I1117 16:47:29.235323   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetMachineName
	I1117 16:47:29.235494   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:29.238551   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.238871   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.238903   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.239113   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:29.239323   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:29.239492   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:29.239675   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:29.239857   47390 main.go:141] libmachine: Using SSH client type: native
	I1117 16:47:29.240420   47390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I1117 16:47:29.240444   47390 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-614434 && echo "no-preload-614434" | sudo tee /etc/hostname
	I1117 16:47:29.382283   47390 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-614434
	
	I1117 16:47:29.382313   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:29.385144   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.385591   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.385620   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.385825   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:29.386061   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:29.386258   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:29.386412   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:29.386575   47390 main.go:141] libmachine: Using SSH client type: native
	I1117 16:47:29.386907   47390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I1117 16:47:29.386929   47390 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-614434' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-614434/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-614434' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1117 16:47:29.522930   47390 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1117 16:47:29.522961   47390 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17634-9353/.minikube CaCertPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17634-9353/.minikube}
	I1117 16:47:29.522984   47390 buildroot.go:174] setting up certificates
	I1117 16:47:29.522998   47390 provision.go:83] configureAuth start
	I1117 16:47:29.523014   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetMachineName
	I1117 16:47:29.523341   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetIP
	I1117 16:47:29.526640   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.527042   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.527068   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.527250   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:29.530520   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.530997   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.531034   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.531317   47390 provision.go:138] copyHostCerts
	I1117 16:47:29.531377   47390 exec_runner.go:144] found /home/jenkins/minikube-integration/17634-9353/.minikube/ca.pem, removing ...
	I1117 16:47:29.531387   47390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17634-9353/.minikube/ca.pem
	I1117 16:47:29.531462   47390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17634-9353/.minikube/ca.pem (1082 bytes)
	I1117 16:47:29.531573   47390 exec_runner.go:144] found /home/jenkins/minikube-integration/17634-9353/.minikube/cert.pem, removing ...
	I1117 16:47:29.531580   47390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17634-9353/.minikube/cert.pem
	I1117 16:47:29.531617   47390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17634-9353/.minikube/cert.pem (1123 bytes)
	I1117 16:47:29.531685   47390 exec_runner.go:144] found /home/jenkins/minikube-integration/17634-9353/.minikube/key.pem, removing ...
	I1117 16:47:29.531690   47390 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17634-9353/.minikube/key.pem
	I1117 16:47:29.531725   47390 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17634-9353/.minikube/key.pem (1679 bytes)
	I1117 16:47:29.531782   47390 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca-key.pem org=jenkins.no-preload-614434 san=[192.168.61.191 192.168.61.191 localhost 127.0.0.1 minikube no-preload-614434]
	I1117 16:47:29.971425   47390 provision.go:172] copyRemoteCerts
	I1117 16:47:29.971485   47390 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1117 16:47:29.971506   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:29.974818   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.975173   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:29.975212   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:29.975435   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:29.975639   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:29.975838   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:29.975997   47390 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa Username:docker}
	I1117 16:47:30.067693   47390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1117 16:47:30.094630   47390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1117 16:47:30.119525   47390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1117 16:47:30.143596   47390 provision.go:86] duration metric: configureAuth took 620.582483ms
	I1117 16:47:30.143628   47390 buildroot.go:189] setting minikube options for container-runtime
	I1117 16:47:30.143854   47390 config.go:182] Loaded profile config "no-preload-614434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:47:30.143878   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:30.144138   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:30.147010   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:30.147471   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:30.147511   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:30.147886   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:30.148092   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:30.148311   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:30.148449   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:30.148605   47390 main.go:141] libmachine: Using SSH client type: native
	I1117 16:47:30.148958   47390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I1117 16:47:30.149006   47390 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1117 16:47:30.276377   47390 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1117 16:47:30.276404   47390 buildroot.go:70] root file system type: tmpfs
	I1117 16:47:30.276538   47390 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1117 16:47:30.276563   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:30.279325   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:30.279652   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:30.279688   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:30.279844   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:30.280052   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:30.280209   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:30.280379   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:30.280529   47390 main.go:141] libmachine: Using SSH client type: native
	I1117 16:47:30.280836   47390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I1117 16:47:30.280894   47390 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1117 16:47:30.443823   47390 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1117 16:47:30.443864   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:30.446623   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:30.446916   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:30.446961   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:30.447093   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:30.447307   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:30.447469   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:30.447615   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:30.447766   47390 main.go:141] libmachine: Using SSH client type: native
	I1117 16:47:30.448141   47390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I1117 16:47:30.448168   47390 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1117 16:47:31.452512   47390 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1117 16:47:31.452548   47390 machine.go:91] provisioned docker machine in 2.217640787s
	I1117 16:47:31.452562   47390 start.go:300] post-start starting for "no-preload-614434" (driver="kvm2")
	I1117 16:47:31.452574   47390 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1117 16:47:31.452593   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:31.452911   47390 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1117 16:47:31.452944   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:31.456449   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.456853   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:31.456880   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.457070   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:31.457259   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:31.457398   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:31.457529   47390 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa Username:docker}
	I1117 16:47:31.543870   47390 ssh_runner.go:195] Run: cat /etc/os-release
	I1117 16:47:31.547643   47390 info.go:137] Remote host: Buildroot 2021.02.12
	I1117 16:47:31.547666   47390 filesync.go:126] Scanning /home/jenkins/minikube-integration/17634-9353/.minikube/addons for local assets ...
	I1117 16:47:31.547742   47390 filesync.go:126] Scanning /home/jenkins/minikube-integration/17634-9353/.minikube/files for local assets ...
	I1117 16:47:31.547856   47390 filesync.go:149] local asset: /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/165582.pem -> 165582.pem in /etc/ssl/certs
	I1117 16:47:31.547983   47390 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1117 16:47:31.557507   47390 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/165582.pem --> /etc/ssl/certs/165582.pem (1708 bytes)
	I1117 16:47:31.580043   47390 start.go:303] post-start completed in 127.465531ms
	I1117 16:47:31.580076   47390 fix.go:56] fixHost completed within 20.157428605s
	I1117 16:47:31.580099   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:31.583399   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.583829   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:31.583866   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.584044   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:31.584288   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:31.584434   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:31.584563   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:31.584697   47390 main.go:141] libmachine: Using SSH client type: native
	I1117 16:47:31.585170   47390 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.61.191 22 <nil> <nil>}
	I1117 16:47:31.585186   47390 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1117 16:47:31.706794   47390 main.go:141] libmachine: SSH cmd err, output: <nil>: 1700239651.655278879
	
	I1117 16:47:31.706816   47390 fix.go:206] guest clock: 1700239651.655278879
	I1117 16:47:31.706826   47390 fix.go:219] Guest: 2023-11-17 16:47:31.655278879 +0000 UTC Remote: 2023-11-17 16:47:31.580080299 +0000 UTC m=+35.022012769 (delta=75.19858ms)
	I1117 16:47:31.706848   47390 fix.go:190] guest clock delta is within tolerance: 75.19858ms
	I1117 16:47:31.706855   47390 start.go:83] releasing machines lock for "no-preload-614434", held for 20.284249586s
	I1117 16:47:31.706884   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:31.707139   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetIP
	I1117 16:47:31.710318   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.710718   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:31.710752   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.710875   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:31.711364   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:31.711554   47390 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:31.711658   47390 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1117 16:47:31.711713   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:31.711811   47390 ssh_runner.go:195] Run: cat /version.json
	I1117 16:47:31.711831   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:31.714767   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.715269   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:31.715297   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.715526   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.715555   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:31.715758   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:31.715902   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:31.715937   47390 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:31.715961   47390 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:31.716027   47390 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa Username:docker}
	I1117 16:47:31.716603   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:31.716786   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:31.716965   47390 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:31.717098   47390 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa Username:docker}
	I1117 16:47:31.843887   47390 ssh_runner.go:195] Run: systemctl --version
	I1117 16:47:31.849874   47390 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1117 16:47:31.855621   47390 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1117 16:47:31.855690   47390 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1117 16:47:31.870640   47390 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1117 16:47:31.870668   47390 start.go:472] detecting cgroup driver to use...
	I1117 16:47:31.870803   47390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 16:47:31.892209   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1117 16:47:31.905462   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1117 16:47:31.916871   47390 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1117 16:47:31.916939   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1117 16:47:31.931516   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1117 16:47:31.944533   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1117 16:47:31.957930   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1117 16:47:31.970549   47390 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1117 16:47:31.983695   47390 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1117 16:47:31.996216   47390 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1117 16:47:32.007980   47390 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1117 16:47:32.016608   47390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:47:32.126178   47390 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1117 16:47:32.149912   47390 start.go:472] detecting cgroup driver to use...
	I1117 16:47:32.150044   47390 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1117 16:47:32.167822   47390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1117 16:47:32.185044   47390 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1117 16:47:32.207881   47390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1117 16:47:32.220441   47390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1117 16:47:32.233608   47390 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1117 16:47:32.260897   47390 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1117 16:47:32.274448   47390 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 16:47:32.295010   47390 ssh_runner.go:195] Run: which cri-dockerd
	I1117 16:47:32.300718   47390 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1117 16:47:32.309488   47390 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1117 16:47:32.328841   47390 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1117 16:47:32.473449   47390 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1117 16:47:32.603256   47390 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1117 16:47:32.603382   47390 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1117 16:47:32.622102   47390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:47:32.744565   47390 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1117 16:47:34.311030   47390 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.56642759s)
	I1117 16:47:34.311103   47390 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1117 16:47:34.475644   47390 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1117 16:47:34.612277   47390 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1117 16:47:34.741897   47390 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:47:34.862973   47390 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1117 16:47:34.880195   47390 out.go:177] 
	W1117 16:47:34.881710   47390 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Job failed. See "journalctl -xe" for details.
	
	W1117 16:47:34.881726   47390 out.go:239] * 
	W1117 16:47:34.882623   47390 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 16:47:34.884874   47390 out.go:177] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p no-preload-614434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 6 (277.257825ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:47:35.166455   47882 status.go:415] kubeconfig endpoint: extract IP: "no-preload-614434" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-614434" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (38.63s)
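
Note: every failure in this serial group traces back to the same event above: `sudo systemctl restart cri-docker.socket` exited with status 1, so provisioning aborted before kubeadm ever ran. A minimal triage sketch, assuming SSH access to the guest (e.g. via `minikube ssh -p no-preload-614434`); the unit names are the ones the log itself restarts, the rest is generic systemd workflow:

	# Why did the socket unit fail? This is the "journalctl -xe" the error points at.
	sudo systemctl status cri-docker.socket --no-pager
	sudo journalctl -xeu cri-docker.socket --no-pager
	# The activated service often carries the real error:
	sudo journalctl -u cri-docker.service --no-pager
	# Confirm both unit files exist and parse:
	systemctl cat cri-docker.socket cri-docker.service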

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-614434" does not exist
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 6 (295.059214ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:47:35.462352   47911 status.go:415] kubeconfig endpoint: extract IP: "no-preload-614434" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-614434" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (0.30s)
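
Note: this sub-test never reaches the cluster; the status helper shows the real blocker, a kubeconfig whose endpoint entry is stale and whose "no-preload-614434" context is missing. A recovery sketch for the stale-context warning, assuming the cluster itself were up (in this run it is not):

	# Inspect what kubeconfig actually contains:
	kubectl config get-contexts
	# Repoint the profile's context, as the warning itself suggests:
	minikube update-context -p no-preload-614434

`minikube update-context` can only succeed once the apiserver is reachable again.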

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.3s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: client config: context "no-preload-614434" does not exist
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-614434 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-614434 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (43.938532ms)

** stderr ** 
	error: context "no-preload-614434" does not exist

** /stderr **
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-614434 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 6 (258.022479ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:47:35.763793   47948 status.go:415] kubeconfig endpoint: extract IP: "no-preload-614434" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-614434" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (0.30s)
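
Note: same cascade; `kubectl --context no-preload-614434` cannot resolve a context that was never re-created after the failed start. Had the start succeeded, the assertion could be checked by hand roughly like this (a sketch, not the harness's own tooling):

	# Confirm the dashboard addon is enabled for the profile:
	minikube addons list -p no-preload-614434
	# Then inspect the deployment the test describes:
	kubectl --context no-preload-614434 -n kubernetes-dashboard get deploy dashboard-metrics-scraper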

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.5s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-614434 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p no-preload-614434 "sudo crictl images -o json": exit status 1 (2.241295096s)

-- stdout --
	FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p no-preload-614434 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0002] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/cri-dockerd.sock": rpc error: code = DeadlineExceeded desc = context deadline exceeded 
start_stop_delete_test.go:304: v1.28.3 images missing (-want +got):
  []string{
- 	"gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"registry.k8s.io/coredns/coredns:v1.10.1",
- 	"registry.k8s.io/etcd:3.5.9-0",
- 	"registry.k8s.io/kube-apiserver:v1.28.3",
- 	"registry.k8s.io/kube-controller-manager:v1.28.3",
- 	"registry.k8s.io/kube-proxy:v1.28.3",
- 	"registry.k8s.io/kube-scheduler:v1.28.3",
- 	"registry.k8s.io/pause:3.9",
  }
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 6 (254.087873ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:47:38.263349   48008 status.go:415] kubeconfig endpoint: extract IP: "no-preload-614434" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-614434" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (2.50s)
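
Note: `crictl` reads its endpoint from /etc/crictl.yaml, which the SecondStart log shows being written to point at cri-dockerd; DeadlineExceeded here means nothing is serving that socket, consistent with cri-docker.socket never restarting. For reference, a plausible configuration under the docker runtime (a sketch; the timeout value is an arbitrary choice, not taken from this run):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	timeout: 10

	# Equivalent one-off invocation without the config file:
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock images -o json

Either form only helps once cri-dockerd is actually listening on the socket.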

TestStartStop/group/no-preload/serial/Pause (2.21s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-614434 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-614434 --alsologtostderr -v=1: exit status 80 (1.714871911s)

-- stdout --
	* Pausing node no-preload-614434 ... 
	
	

-- /stdout --
** stderr ** 
	I1117 16:47:38.331183   48038 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:47:38.331306   48038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:47:38.331317   48038 out.go:309] Setting ErrFile to fd 2...
	I1117 16:47:38.331321   48038 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:47:38.331532   48038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:47:38.331820   48038 out.go:303] Setting JSON to false
	I1117 16:47:38.331844   48038 mustload.go:65] Loading cluster: no-preload-614434
	I1117 16:47:38.332275   48038 config.go:182] Loaded profile config "no-preload-614434": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:47:38.332723   48038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:47:38.332773   48038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:47:38.347347   48038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38493
	I1117 16:47:38.347833   48038 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:47:38.348436   48038 main.go:141] libmachine: Using API Version  1
	I1117 16:47:38.348460   48038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:47:38.348793   48038 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:47:38.348967   48038 main.go:141] libmachine: (no-preload-614434) Calling .GetState
	I1117 16:47:38.350532   48038 host.go:66] Checking if "no-preload-614434" exists ...
	I1117 16:47:38.350807   48038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:47:38.350841   48038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:47:38.365048   48038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37911
	I1117 16:47:38.365465   48038 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:47:38.365944   48038 main.go:141] libmachine: Using API Version  1
	I1117 16:47:38.365976   48038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:47:38.366301   48038 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:47:38.366502   48038 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:38.367648   48038 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.32.1-1700142131-17634/minikube-v1.32.1-1700142131-17634-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.32.1-1700142131-17634-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string:/home/jenkins:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-614434 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1117 16:47:38.370145   48038 out.go:177] * Pausing node no-preload-614434 ... 
	I1117 16:47:38.371487   48038 host.go:66] Checking if "no-preload-614434" exists ...
	I1117 16:47:38.371797   48038 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:47:38.371838   48038 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:47:38.385671   48038 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45103
	I1117 16:47:38.386035   48038 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:47:38.386458   48038 main.go:141] libmachine: Using API Version  1
	I1117 16:47:38.386478   48038 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:47:38.386809   48038 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:47:38.386960   48038 main.go:141] libmachine: (no-preload-614434) Calling .DriverName
	I1117 16:47:38.387167   48038 ssh_runner.go:195] Run: systemctl --version
	I1117 16:47:38.387188   48038 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHHostname
	I1117 16:47:38.390053   48038 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:38.390445   48038 main.go:141] libmachine: (no-preload-614434) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7b:12:9e", ip: ""} in network mk-no-preload-614434: {Iface:virbr2 ExpiryTime:2023-11-17 17:47:23 +0000 UTC Type:0 Mac:52:54:00:7b:12:9e Iaid: IPaddr:192.168.61.191 Prefix:24 Hostname:no-preload-614434 Clientid:01:52:54:00:7b:12:9e}
	I1117 16:47:38.390477   48038 main.go:141] libmachine: (no-preload-614434) DBG | domain no-preload-614434 has defined IP address 192.168.61.191 and MAC address 52:54:00:7b:12:9e in network mk-no-preload-614434
	I1117 16:47:38.390636   48038 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHPort
	I1117 16:47:38.390815   48038 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHKeyPath
	I1117 16:47:38.390964   48038 main.go:141] libmachine: (no-preload-614434) Calling .GetSSHUsername
	I1117 16:47:38.391123   48038 sshutil.go:53] new ssh client: &{IP:192.168.61.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/no-preload-614434/id_rsa Username:docker}
	I1117 16:47:38.476043   48038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:47:38.487345   48038 pause.go:51] kubelet running: false
	I1117 16:47:38.487404   48038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1117 16:47:38.499780   48038 retry.go:31] will retry after 307.704807ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1117 16:47:38.808378   48038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:47:38.820924   48038 pause.go:51] kubelet running: false
	I1117 16:47:38.820996   48038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1117 16:47:38.833838   48038 retry.go:31] will retry after 228.871049ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1117 16:47:39.063316   48038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:47:39.075836   48038 pause.go:51] kubelet running: false
	I1117 16:47:39.075892   48038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1117 16:47:39.088346   48038 retry.go:31] will retry after 762.07368ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	I1117 16:47:39.851287   48038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:47:39.867753   48038 pause.go:51] kubelet running: false
	I1117 16:47:39.867813   48038 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1117 16:47:39.887107   48038 out.go:177] 
	W1117 16:47:39.888509   48038 out.go:239] X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	W1117 16:47:39.888527   48038 out.go:239] * 
	W1117 16:47:39.988463   48038 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1117 16:47:39.989914   48038 out.go:177] 

** /stderr **
start_stop_delete_test.go:311: out/minikube-linux-amd64 pause -p no-preload-614434 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 6 (254.73639ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:47:40.231816   48080 status.go:415] kubeconfig endpoint: extract IP: "no-preload-614434" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-614434" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 6 (237.573119ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:47:40.471445   48110 status.go:415] kubeconfig endpoint: extract IP: "no-preload-614434" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "no-preload-614434" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/no-preload/serial/Pause (2.21s)
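
Note: pause fails for the same underlying reason; provisioning aborted before kubeadm ever installed a kubelet unit, so `systemctl disable --now kubelet` can only report "Unit file kubelet.service does not exist." A quick pre-check, assuming SSH access to the guest:

	# Does a kubelet unit exist at all?
	systemctl list-unit-files kubelet.service --no-pager
	systemctl is-active kubelet
	# This is essentially the command minikube's pause path retries above:
	sudo systemctl disable --now kubelet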

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-055844 "sudo crictl images -o json"
start_stop_delete_test.go:304: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p old-k8s-version-055844 "sudo crictl images -o json": exit status 1 (256.392598ms)

-- stdout --
	FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
start_stop_delete_test.go:304: failed to get images inside minikube. args "out/minikube-linux-amd64 ssh -p old-k8s-version-055844 \"sudo crictl images -o json\"": exit status 1
start_stop_delete_test.go:304: failed to decode images json invalid character '\x1b' looking for beginning of value. output:
FATA[0000] validate service connection: validate CRI v1 image API for endpoint "unix:///var/run/dockershim.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.ImageService 
start_stop_delete_test.go:304: v1.16.0 images missing (-want +got):
  []string{
- 	"k8s.gcr.io/coredns:1.6.2",
- 	"k8s.gcr.io/etcd:3.3.15-0",
- 	"k8s.gcr.io/k8s-minikube/storage-provisioner:v5",
- 	"k8s.gcr.io/kube-apiserver:v1.16.0",
- 	"k8s.gcr.io/kube-controller-manager:v1.16.0",
- 	"k8s.gcr.io/kube-proxy:v1.16.0",
- 	"k8s.gcr.io/kube-scheduler:v1.16.0",
- 	"k8s.gcr.io/pause:3.1",
  }
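
Note: unlike the no-preload failures, this is a protocol mismatch rather than a dead runtime. Kubernetes v1.16.0 still uses dockershim (unix:///var/run/dockershim.sock), which implements only the older CRI v1alpha2 API, while the crictl in the guest validates against CRI v1, hence "unknown service runtime.v1.ImageService". A workaround sketch, not what the harness does: under the docker runtime the same listing is available without going through CRI at all.

	# Ask dockerd directly instead of dockershim:
	sudo docker images --format '{{.Repository}}:{{.Tag}}'

An older crictl release that still speaks v1alpha2 would also work.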
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-055844 -n old-k8s-version-055844
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-055844 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-055844 logs -n 25: (1.354057928s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl cat kubelet                                |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | journalctl -xeu kubelet --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl status docker --all                        |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl cat docker                                 |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /etc/docker/daemon.json                              |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo docker                        | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | system info                                          |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl status cri-docker                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl cat cri-docker                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | cri-dockerd --version                                |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC |                     |
	|         | systemctl status containerd                          |                        |         |         |                     |                     |
	|         | --all --full --no-pager                              |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl cat containerd                             |                        |         |         |                     |                     |
	|         | --no-pager                                           |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /lib/systemd/system/containerd.service               |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo cat                           | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /etc/containerd/config.toml                          |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | containerd config dump                               |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC |                     |
	|         | systemctl status crio --all                          |                        |         |         |                     |                     |
	|         | --full --no-pager                                    |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo                               | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | systemctl cat crio --no-pager                        |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo find                          | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                        |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                        |         |         |                     |                     |
	| ssh     | -p kindnet-081012 sudo crio                          | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	|         | config                                               |                        |         |         |                     |                     |
	| delete  | -p kindnet-081012                                    | kindnet-081012         | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC | 17 Nov 23 16:53 UTC |
	| start   | -p false-081012 --memory=3072                        | false-081012           | jenkins | v1.32.0 | 17 Nov 23 16:53 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                        |         |         |                     |                     |
	|         | --wait-timeout=15m --cni=false                       |                        |         |         |                     |                     |
	|         | --driver=kvm2                                        |                        |         |         |                     |                     |
	| ssh     | -p old-k8s-version-055844 sudo                       | old-k8s-version-055844 | jenkins | v1.32.0 | 17 Nov 23 16:54 UTC |                     |
	|         | crictl images -o json                                |                        |         |         |                     |                     |
	|---------|------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
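Most of the rows above are minikube's container-runtime inspection pass, run over SSH against the kindnet-081012 profile before it was deleted. Replaying two of them by hand would look like the sketch below; the binary path, profile name, and arguments are taken verbatim from the table, and the commands only work while the profile still exists.

	# Hypothetical manual replay of two inspection rows from the table above:
	out/minikube-linux-amd64 ssh -p kindnet-081012 sudo cri-dockerd --version
	out/minikube-linux-amd64 ssh -p kindnet-081012 sudo systemctl cat containerd --no-pager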
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/17 16:53:43
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 16:53:43.503936   54653 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:53:43.504079   54653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:53:43.504087   54653 out.go:309] Setting ErrFile to fd 2...
	I1117 16:53:43.504092   54653 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:53:43.504274   54653 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:53:43.504817   54653 out.go:303] Setting JSON to false
	I1117 16:53:43.505888   54653 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5770,"bootTime":1700234254,"procs":327,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:53:43.505947   54653 start.go:138] virtualization: kvm guest
	I1117 16:53:43.508423   54653 out.go:177] * [false-081012] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 16:53:43.509923   54653 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:53:43.511289   54653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:53:43.509924   54653 notify.go:220] Checking for updates...
	I1117 16:53:43.513679   54653 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 16:53:43.515128   54653 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 16:53:43.516634   54653 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:53:43.518076   54653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:53:43.520178   54653 config.go:182] Loaded profile config "calico-081012": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:53:43.520332   54653 config.go:182] Loaded profile config "custom-flannel-081012": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:53:43.520494   54653 config.go:182] Loaded profile config "old-k8s-version-055844": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1117 16:53:43.520586   54653 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:53:43.558593   54653 out.go:177] * Using the kvm2 driver based on user configuration
	I1117 16:53:43.560079   54653 start.go:298] selected driver: kvm2
	I1117 16:53:43.560090   54653 start.go:902] validating driver "kvm2" against <nil>
	I1117 16:53:43.560106   54653 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:53:43.560750   54653 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:53:43.560846   54653 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17634-9353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 16:53:43.578959   54653 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1117 16:53:43.579016   54653 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1117 16:53:43.579318   54653 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1117 16:53:43.579402   54653 cni.go:84] Creating CNI manager for "false"
	I1117 16:53:43.579427   54653 start_flags.go:323] config:
	{Name:false-081012 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:false-081012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:53:43.579617   54653 iso.go:125] acquiring lock: {Name:mkfd0387d5051e05351c5f239ccf79a882c64dcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:53:43.582378   54653 out.go:177] * Starting control plane node false-081012 in cluster false-081012
	I1117 16:53:40.365793   52453 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.3/kubectl ...
	I1117 16:53:40.365809   52453 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (244810 bytes)
	I1117 16:53:40.391958   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1117 16:53:42.749871   52453 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (2.357869125s)
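The apply above pushes the 244810-byte CNI manifest with an explicit kubeconfig, which is why it can run before any user-facing kubeconfig exists. A hedged follow-up check that the manifest's pods actually scheduled is sketched below; the k8s-app=calico-node label is the conventional Calico one and an assumption here, not something this log shows.

	# Hypothetical verification that the CNI apply above took effect:
	sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  get pods -n kube-system -l k8s-app=calico-node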
	I1117 16:53:42.749973   52453 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1117 16:53:42.750065   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:42.750194   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=49db7ae766960f8f9e07cffcbe974581755c3ae6 minikube.k8s.io/name=calico-081012 minikube.k8s.io/updated_at=2023_11_17T16_53_42_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:42.954858   52453 ops.go:34] apiserver oom_adj: -16
	I1117 16:53:42.955015   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:43.053106   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:43.641302   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:44.140827   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:44.640818   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:44.903070   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:44.903811   52964 main.go:141] libmachine: (custom-flannel-081012) Found IP for machine: 192.168.39.84
	I1117 16:53:44.903837   52964 main.go:141] libmachine: (custom-flannel-081012) Reserving static IP address...
	I1117 16:53:44.903856   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has current primary IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:44.904183   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | unable to find host DHCP lease matching {name: "custom-flannel-081012", mac: "52:54:00:ad:4a:c2", ip: "192.168.39.84"} in network mk-custom-flannel-081012
	I1117 16:53:44.985542   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | Getting to WaitForSSH function...
	I1117 16:53:44.985570   52964 main.go:141] libmachine: (custom-flannel-081012) Reserved static IP address: 192.168.39.84
	I1117 16:53:44.985584   52964 main.go:141] libmachine: (custom-flannel-081012) Waiting for SSH to be available...
	I1117 16:53:44.988239   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:44.988790   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:44.988830   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:44.989056   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | Using SSH client type: external
	I1117 16:53:44.989077   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | Using SSH private key: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa (-rw-------)
	I1117 16:53:44.989101   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1117 16:53:44.989108   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | About to run SSH command:
	I1117 16:53:44.989121   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | exit 0
	I1117 16:53:45.073608   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | SSH cmd err, output: <nil>: 
	I1117 16:53:45.073876   52964 main.go:141] libmachine: (custom-flannel-081012) KVM machine creation complete!
	I1117 16:53:45.074202   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetConfigRaw
	I1117 16:53:45.074727   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:45.074902   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:45.075086   52964 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1117 16:53:45.075099   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetState
	I1117 16:53:45.076579   52964 main.go:141] libmachine: Detecting operating system of created instance...
	I1117 16:53:45.076592   52964 main.go:141] libmachine: Waiting for SSH to be available...
	I1117 16:53:45.076599   52964 main.go:141] libmachine: Getting to WaitForSSH function...
	I1117 16:53:45.076609   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.079159   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.079612   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.079641   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.079815   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:45.080035   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.080177   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.080359   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:45.080497   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:45.080897   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:45.080914   52964 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1117 16:53:45.193470   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
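The `exit 0` round trip above is libmachine's entire SSH-readiness probe: any command that exits cleanly proves sshd is up and the key is accepted. A standalone approximation follows, reusing the options logged at 16:53:44.989101; the explicit retry loop is an assumption on my part, since libmachine retries internally.

	# Hypothetical standalone version of the WaitForSSH probe:
	until ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	      -o IdentitiesOnly=yes \
	      -i /home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa \
	      docker@192.168.39.84 exit 0; do
	  sleep 1  # keep probing until sshd accepts the key
	done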
	I1117 16:53:45.193496   52964 main.go:141] libmachine: Detecting the provisioner...
	I1117 16:53:45.193504   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.196637   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.197020   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.197056   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.197215   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:45.197413   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.197602   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.197759   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:45.197913   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:45.198311   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:45.198327   52964 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1117 16:53:45.306980   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1117 16:53:45.307033   52964 main.go:141] libmachine: found compatible host: buildroot
	I1117 16:53:45.307041   52964 main.go:141] libmachine: Provisioning with buildroot...
	I1117 16:53:45.307072   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetMachineName
	I1117 16:53:45.307358   52964 buildroot.go:166] provisioning hostname "custom-flannel-081012"
	I1117 16:53:45.307382   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetMachineName
	I1117 16:53:45.307571   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.310147   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.310528   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.310547   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.310708   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:45.310897   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.311040   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.311167   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:45.311363   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:45.311660   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:45.311674   52964 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-081012 && echo "custom-flannel-081012" | sudo tee /etc/hostname
	I1117 16:53:45.431351   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-081012
	
	I1117 16:53:45.431385   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.434725   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.435115   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.435157   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.435313   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:45.435530   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.435671   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.435816   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:45.435956   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:45.436283   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:45.436308   52964 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-081012' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-081012/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-081012' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1117 16:53:45.554601   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1117 16:53:45.554634   52964 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17634-9353/.minikube CaCertPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17634-9353/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17634-9353/.minikube}
	I1117 16:53:45.554687   52964 buildroot.go:174] setting up certificates
	I1117 16:53:45.554714   52964 provision.go:83] configureAuth start
	I1117 16:53:45.554732   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetMachineName
	I1117 16:53:45.555004   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetIP
	I1117 16:53:45.557633   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.557929   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.557963   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.558155   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.560281   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.560723   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.560762   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.560869   52964 provision.go:138] copyHostCerts
	I1117 16:53:45.560911   52964 exec_runner.go:144] found /home/jenkins/minikube-integration/17634-9353/.minikube/ca.pem, removing ...
	I1117 16:53:45.560920   52964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17634-9353/.minikube/ca.pem
	I1117 16:53:45.560974   52964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17634-9353/.minikube/ca.pem (1082 bytes)
	I1117 16:53:45.561054   52964 exec_runner.go:144] found /home/jenkins/minikube-integration/17634-9353/.minikube/cert.pem, removing ...
	I1117 16:53:45.561062   52964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17634-9353/.minikube/cert.pem
	I1117 16:53:45.561082   52964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17634-9353/.minikube/cert.pem (1123 bytes)
	I1117 16:53:45.561129   52964 exec_runner.go:144] found /home/jenkins/minikube-integration/17634-9353/.minikube/key.pem, removing ...
	I1117 16:53:45.561135   52964 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17634-9353/.minikube/key.pem
	I1117 16:53:45.561152   52964 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17634-9353/.minikube/key.pem (1679 bytes)
	I1117 16:53:45.561200   52964 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-081012 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube custom-flannel-081012]
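The server cert above is minted from minikube's own CA with the VM IP and the hostnames as SANs, so Docker's TLS endpoint validates under any of those names. An openssl equivalent is sketched below under stated assumptions: minikube does this in Go rather than via openssl, the flag set is mine, and only the CA paths, org, and SAN list come from the log line above.

	# Hypothetical openssl re-creation of the logged server cert:
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
	  -subj "/O=jenkins.custom-flannel-081012"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 1095 \
	  -extfile <(printf 'subjectAltName=IP:192.168.39.84,DNS:localhost,IP:127.0.0.1,DNS:minikube,DNS:custom-flannel-081012')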
	I1117 16:53:45.803062   52964 provision.go:172] copyRemoteCerts
	I1117 16:53:45.803114   52964 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1117 16:53:45.803137   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.805818   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.806124   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.806157   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.806416   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:45.806610   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.806775   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:45.806949   52964 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa Username:docker}
	I1117 16:53:45.891384   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1117 16:53:45.913912   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1117 16:53:45.935716   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1117 16:53:45.957645   52964 provision.go:86] duration metric: configureAuth took 402.915914ms
	I1117 16:53:45.957670   52964 buildroot.go:189] setting minikube options for container-runtime
	I1117 16:53:45.957840   52964 config.go:182] Loaded profile config "custom-flannel-081012": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:53:45.957866   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:45.958172   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:45.961022   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.961464   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:45.961489   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:45.961684   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:45.961920   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.962129   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:45.962307   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:45.962488   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:45.962809   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:45.962824   52964 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1117 16:53:46.071748   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I1117 16:53:46.071770   52964 buildroot.go:70] root file system type: tmpfs
	I1117 16:53:46.071889   52964 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1117 16:53:46.071909   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:46.074826   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.075118   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:46.075146   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.075317   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:46.075524   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:46.075652   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:46.075760   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:46.075883   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:46.076179   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:46.076237   52964 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1117 16:53:46.201337   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
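One oddity worth flagging: the `%!s(MISSING)` token in the printf command logged at 16:53:46.076237 is Go's fmt marker for a format verb with no matching argument. It is an artifact of how the command string was rendered into this log; the executed command almost certainly read `printf %s`, as the correctly echoed unit file above confirms. A hedged in-VM check of what systemd ultimately loads, using the same idiom this report already applies to containerd and crio:

	# Hypothetical check of the unit systemd ends up loading inside the VM:
	sudo systemctl cat docker --no-pager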
	
	I1117 16:53:46.201376   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:46.204173   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.204513   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:46.204542   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.204761   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:46.204962   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:46.205174   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:46.205356   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:46.205535   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:46.205889   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:46.205919   52964 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
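The one-liner above is an install-if-changed idiom: diff exits 0 when the new unit matches the installed one, so the mv/daemon-reload/enable/restart group only fires on a real change (or, as in the output later in this log, when no unit exists yet). The same logic spelled out, with paths verbatim from the command:

	# The diff-guarded install from the log, restated as an if-block:
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi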
	I1117 16:53:43.583937   54653 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1117 16:53:43.583983   54653 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1117 16:53:43.583993   54653 cache.go:56] Caching tarball of preloaded images
	I1117 16:53:43.584084   54653 preload.go:174] Found /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1117 16:53:43.584096   54653 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1117 16:53:43.584205   54653 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/false-081012/config.json ...
	I1117 16:53:43.584227   54653 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/false-081012/config.json: {Name:mk93d415f36612f7e3560bc80a99ffaf502aadfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:43.584402   54653 start.go:365] acquiring machines lock for false-081012: {Name:mk6ad0795a1bc343dcb7c179b8c56e6ba763a05d Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1117 16:53:47.226846   54653 start.go:369] acquired machines lock for "false-081012" in 3.642418665s
	I1117 16:53:47.226917   54653 start.go:93] Provisioning new machine with config: &{Name:false-081012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:false-081012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1117 16:53:47.227022   54653 start.go:125] createHost starting for "" (driver="kvm2")
	I1117 16:53:47.229521   54653 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1117 16:53:47.229756   54653 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:47.229796   54653 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:47.249699   54653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44157
	I1117 16:53:47.250233   54653 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:47.250762   54653 main.go:141] libmachine: Using API Version  1
	I1117 16:53:47.250788   54653 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:47.251145   54653 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:47.251340   54653 main.go:141] libmachine: (false-081012) Calling .GetMachineName
	I1117 16:53:47.251488   54653 main.go:141] libmachine: (false-081012) Calling .DriverName
	I1117 16:53:47.251618   54653 start.go:159] libmachine.API.Create for "false-081012" (driver="kvm2")
	I1117 16:53:47.251656   54653 client.go:168] LocalClient.Create starting
	I1117 16:53:47.251691   54653 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem
	I1117 16:53:47.251729   54653 main.go:141] libmachine: Decoding PEM data...
	I1117 16:53:47.251751   54653 main.go:141] libmachine: Parsing certificate...
	I1117 16:53:47.251810   54653 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17634-9353/.minikube/certs/cert.pem
	I1117 16:53:47.251834   54653 main.go:141] libmachine: Decoding PEM data...
	I1117 16:53:47.251852   54653 main.go:141] libmachine: Parsing certificate...
	I1117 16:53:47.251878   54653 main.go:141] libmachine: Running pre-create checks...
	I1117 16:53:47.251891   54653 main.go:141] libmachine: (false-081012) Calling .PreCreateCheck
	I1117 16:53:47.252210   54653 main.go:141] libmachine: (false-081012) Calling .GetConfigRaw
	I1117 16:53:47.252573   54653 main.go:141] libmachine: Creating machine...
	I1117 16:53:47.252587   54653 main.go:141] libmachine: (false-081012) Calling .Create
	I1117 16:53:47.252717   54653 main.go:141] libmachine: (false-081012) Creating KVM machine...
	I1117 16:53:47.253967   54653 main.go:141] libmachine: (false-081012) DBG | found existing default KVM network
	I1117 16:53:47.255550   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.255356   54715 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d9:01:3d} reservation:<nil>}
	I1117 16:53:47.256381   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.256318   54715 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:d0:02:4d} reservation:<nil>}
	I1117 16:53:47.257761   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.257675   54715 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015ec0}
	I1117 16:53:47.263034   54653 main.go:141] libmachine: (false-081012) DBG | trying to create private KVM network mk-false-081012 192.168.61.0/24...
	I1117 16:53:47.345260   54653 main.go:141] libmachine: (false-081012) DBG | private KVM network mk-false-081012 192.168.61.0/24 created
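The subnet selection above walks the candidate private ranges, skips 192.168.39.0/24 and 192.168.50.0/24 because existing virbr interfaces already hold them, and claims 192.168.61.0/24. The resulting libvirt network can be inspected from the host with standard virsh commands; this is a sketch, with the connection URI and network name taken from the log.

	# Hypothetical host-side inspection of the network minikube just created:
	virsh -c qemu:///system net-list --all
	virsh -c qemu:///system net-dumpxml mk-false-081012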
	I1117 16:53:47.345293   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.345233   54715 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 16:53:47.345315   54653 main.go:141] libmachine: (false-081012) Setting up store path in /home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012 ...
	I1117 16:53:47.345333   54653 main.go:141] libmachine: (false-081012) Building disk image from file:///home/jenkins/minikube-integration/17634-9353/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1117 16:53:47.345435   54653 main.go:141] libmachine: (false-081012) Downloading /home/jenkins/minikube-integration/17634-9353/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17634-9353/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1117 16:53:47.597490   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.597358   54715 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012/id_rsa...
	I1117 16:53:47.878321   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.878179   54715 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012/false-081012.rawdisk...
	I1117 16:53:47.878357   54653 main.go:141] libmachine: (false-081012) DBG | Writing magic tar header
	I1117 16:53:47.878374   54653 main.go:141] libmachine: (false-081012) DBG | Writing SSH key tar header
	I1117 16:53:47.878461   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:47.878370   54715 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012 ...
	I1117 16:53:47.878520   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012
	I1117 16:53:47.878554   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9353/.minikube/machines
	I1117 16:53:47.878571   54653 main.go:141] libmachine: (false-081012) Setting executable bit set on /home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012 (perms=drwx------)
	I1117 16:53:47.878589   54653 main.go:141] libmachine: (false-081012) Setting executable bit set on /home/jenkins/minikube-integration/17634-9353/.minikube/machines (perms=drwxr-xr-x)
	I1117 16:53:47.878606   54653 main.go:141] libmachine: (false-081012) Setting executable bit set on /home/jenkins/minikube-integration/17634-9353/.minikube (perms=drwxr-xr-x)
	I1117 16:53:47.878627   54653 main.go:141] libmachine: (false-081012) Setting executable bit set on /home/jenkins/minikube-integration/17634-9353 (perms=drwxrwxr-x)
	I1117 16:53:47.878649   54653 main.go:141] libmachine: (false-081012) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1117 16:53:47.878663   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 16:53:47.878684   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17634-9353
	I1117 16:53:47.878699   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1117 16:53:47.878729   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home/jenkins
	I1117 16:53:47.878764   54653 main.go:141] libmachine: (false-081012) DBG | Checking permissions on dir: /home
	I1117 16:53:47.878780   54653 main.go:141] libmachine: (false-081012) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1117 16:53:47.878797   54653 main.go:141] libmachine: (false-081012) Creating domain...
	I1117 16:53:47.878812   54653 main.go:141] libmachine: (false-081012) DBG | Skipping /home - not owner
	I1117 16:53:47.880098   54653 main.go:141] libmachine: (false-081012) define libvirt domain using xml: 
	I1117 16:53:47.880124   54653 main.go:141] libmachine: (false-081012) <domain type='kvm'>
	I1117 16:53:47.880145   54653 main.go:141] libmachine: (false-081012)   <name>false-081012</name>
	I1117 16:53:47.880159   54653 main.go:141] libmachine: (false-081012)   <memory unit='MiB'>3072</memory>
	I1117 16:53:47.880174   54653 main.go:141] libmachine: (false-081012)   <vcpu>2</vcpu>
	I1117 16:53:47.880186   54653 main.go:141] libmachine: (false-081012)   <features>
	I1117 16:53:47.880197   54653 main.go:141] libmachine: (false-081012)     <acpi/>
	I1117 16:53:47.880209   54653 main.go:141] libmachine: (false-081012)     <apic/>
	I1117 16:53:47.880223   54653 main.go:141] libmachine: (false-081012)     <pae/>
	I1117 16:53:47.880234   54653 main.go:141] libmachine: (false-081012)     
	I1117 16:53:47.880245   54653 main.go:141] libmachine: (false-081012)   </features>
	I1117 16:53:47.880258   54653 main.go:141] libmachine: (false-081012)   <cpu mode='host-passthrough'>
	I1117 16:53:47.880271   54653 main.go:141] libmachine: (false-081012)   
	I1117 16:53:47.880285   54653 main.go:141] libmachine: (false-081012)   </cpu>
	I1117 16:53:47.880298   54653 main.go:141] libmachine: (false-081012)   <os>
	I1117 16:53:47.880311   54653 main.go:141] libmachine: (false-081012)     <type>hvm</type>
	I1117 16:53:47.880325   54653 main.go:141] libmachine: (false-081012)     <boot dev='cdrom'/>
	I1117 16:53:47.880336   54653 main.go:141] libmachine: (false-081012)     <boot dev='hd'/>
	I1117 16:53:47.880347   54653 main.go:141] libmachine: (false-081012)     <bootmenu enable='no'/>
	I1117 16:53:47.880359   54653 main.go:141] libmachine: (false-081012)   </os>
	I1117 16:53:47.880372   54653 main.go:141] libmachine: (false-081012)   <devices>
	I1117 16:53:47.880386   54653 main.go:141] libmachine: (false-081012)     <disk type='file' device='cdrom'>
	I1117 16:53:47.880405   54653 main.go:141] libmachine: (false-081012)       <source file='/home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012/boot2docker.iso'/>
	I1117 16:53:47.880419   54653 main.go:141] libmachine: (false-081012)       <target dev='hdc' bus='scsi'/>
	I1117 16:53:47.880432   54653 main.go:141] libmachine: (false-081012)       <readonly/>
	I1117 16:53:47.880444   54653 main.go:141] libmachine: (false-081012)     </disk>
	I1117 16:53:47.880459   54653 main.go:141] libmachine: (false-081012)     <disk type='file' device='disk'>
	I1117 16:53:47.880473   54653 main.go:141] libmachine: (false-081012)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1117 16:53:47.880492   54653 main.go:141] libmachine: (false-081012)       <source file='/home/jenkins/minikube-integration/17634-9353/.minikube/machines/false-081012/false-081012.rawdisk'/>
	I1117 16:53:47.880505   54653 main.go:141] libmachine: (false-081012)       <target dev='hda' bus='virtio'/>
	I1117 16:53:47.880519   54653 main.go:141] libmachine: (false-081012)     </disk>
	I1117 16:53:47.880532   54653 main.go:141] libmachine: (false-081012)     <interface type='network'>
	I1117 16:53:47.880547   54653 main.go:141] libmachine: (false-081012)       <source network='mk-false-081012'/>
	I1117 16:53:47.880560   54653 main.go:141] libmachine: (false-081012)       <model type='virtio'/>
	I1117 16:53:47.880573   54653 main.go:141] libmachine: (false-081012)     </interface>
	I1117 16:53:47.880586   54653 main.go:141] libmachine: (false-081012)     <interface type='network'>
	I1117 16:53:47.880600   54653 main.go:141] libmachine: (false-081012)       <source network='default'/>
	I1117 16:53:47.880614   54653 main.go:141] libmachine: (false-081012)       <model type='virtio'/>
	I1117 16:53:47.880629   54653 main.go:141] libmachine: (false-081012)     </interface>
	I1117 16:53:47.880641   54653 main.go:141] libmachine: (false-081012)     <serial type='pty'>
	I1117 16:53:47.880655   54653 main.go:141] libmachine: (false-081012)       <target port='0'/>
	I1117 16:53:47.880667   54653 main.go:141] libmachine: (false-081012)     </serial>
	I1117 16:53:47.880680   54653 main.go:141] libmachine: (false-081012)     <console type='pty'>
	I1117 16:53:47.880700   54653 main.go:141] libmachine: (false-081012)       <target type='serial' port='0'/>
	I1117 16:53:47.880718   54653 main.go:141] libmachine: (false-081012)     </console>
	I1117 16:53:47.880731   54653 main.go:141] libmachine: (false-081012)     <rng model='virtio'>
	I1117 16:53:47.880746   54653 main.go:141] libmachine: (false-081012)       <backend model='random'>/dev/random</backend>
	I1117 16:53:47.880757   54653 main.go:141] libmachine: (false-081012)     </rng>
	I1117 16:53:47.880770   54653 main.go:141] libmachine: (false-081012)     
	I1117 16:53:47.880782   54653 main.go:141] libmachine: (false-081012)     
	I1117 16:53:47.880795   54653 main.go:141] libmachine: (false-081012)   </devices>
	I1117 16:53:47.880806   54653 main.go:141] libmachine: (false-081012) </domain>
	I1117 16:53:47.880821   54653 main.go:141] libmachine: (false-081012) 
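
For context: the XML dump above is the libvirt domain definition the kvm2 driver submits before booting the VM. Below is a minimal sketch of that define-and-create step using the libvirt Go bindings (libvirt.org/go/libvirt); it is an illustration under that assumption, not minikube's actual driver code.

// Sketch: define and boot a libvirt domain from XML like the dump above.
package main

import (
	"log"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatalf("connect to libvirt: %v", err)
	}
	defer conn.Close()

	// A complete <domain>...</domain> document, like the one logged
	// above, belongs here; this placeholder is illustrative only.
	domainXML := `<domain type='kvm'>...</domain>`

	dom, err := conn.DomainDefineXML(domainXML) // persist the definition
	if err != nil {
		log.Fatalf("define domain: %v", err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot it ("Creating domain...")
		log.Fatalf("start domain: %v", err)
	}
	log.Println("domain started")
}
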
	I1117 16:53:47.885592   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:57:51:ec in network default
	I1117 16:53:47.886153   54653 main.go:141] libmachine: (false-081012) Ensuring networks are active...
	I1117 16:53:47.886184   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:47.886902   54653 main.go:141] libmachine: (false-081012) Ensuring network default is active
	I1117 16:53:47.887230   54653 main.go:141] libmachine: (false-081012) Ensuring network mk-false-081012 is active
	I1117 16:53:47.887824   54653 main.go:141] libmachine: (false-081012) Getting domain xml...
	I1117 16:53:47.888640   54653 main.go:141] libmachine: (false-081012) Creating domain...
	I1117 16:53:45.140933   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:45.641560   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:46.141347   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:46.641675   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:47.141477   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:47.641252   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:48.141351   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:48.640914   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:49.141406   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:49.640790   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
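
For context: the repeated `kubectl get sa default` runs above are a readiness poll, retried on a roughly 500ms cadence until the default ServiceAccount exists. A self-contained sketch of that loop follows; the helper name and timeout are assumptions, while the command and paths are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.28.3/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // the default ServiceAccount exists
		}
		time.Sleep(500 * time.Millisecond) // matches the spacing in the log
	}
	return fmt.Errorf("default service account not ready within %v", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
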
	I1117 16:53:46.978395   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I1117 16:53:46.978437   52964 main.go:141] libmachine: Checking connection to Docker...
	I1117 16:53:46.978450   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetURL
	I1117 16:53:46.979813   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | Using libvirt version 6000000
	I1117 16:53:46.982207   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.982520   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:46.982549   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.982747   52964 main.go:141] libmachine: Docker is up and running!
	I1117 16:53:46.982765   52964 main.go:141] libmachine: Reticulating splines...
	I1117 16:53:46.982772   52964 client.go:171] LocalClient.Create took 29.369879792s
	I1117 16:53:46.982791   52964 start.go:167] duration metric: libmachine.API.Create for "custom-flannel-081012" took 29.369934832s
	I1117 16:53:46.982800   52964 start.go:300] post-start starting for "custom-flannel-081012" (driver="kvm2")
	I1117 16:53:46.982808   52964 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1117 16:53:46.982836   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:46.983091   52964 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1117 16:53:46.983125   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:46.985484   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.985791   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:46.985818   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:46.986021   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:46.986220   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:46.986364   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:46.986499   52964 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa Username:docker}
	I1117 16:53:47.070926   52964 ssh_runner.go:195] Run: cat /etc/os-release
	I1117 16:53:47.074727   52964 info.go:137] Remote host: Buildroot 2021.02.12
	I1117 16:53:47.074751   52964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17634-9353/.minikube/addons for local assets ...
	I1117 16:53:47.074829   52964 filesync.go:126] Scanning /home/jenkins/minikube-integration/17634-9353/.minikube/files for local assets ...
	I1117 16:53:47.074912   52964 filesync.go:149] local asset: /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/165582.pem -> 165582.pem in /etc/ssl/certs
	I1117 16:53:47.075018   52964 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1117 16:53:47.082535   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/165582.pem --> /etc/ssl/certs/165582.pem (1708 bytes)
	I1117 16:53:47.106221   52964 start.go:303] post-start completed in 123.408782ms
	I1117 16:53:47.106271   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetConfigRaw
	I1117 16:53:47.106942   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetIP
	I1117 16:53:47.109664   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.110118   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:47.110154   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.110405   52964 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/config.json ...
	I1117 16:53:47.110595   52964 start.go:128] duration metric: createHost completed in 29.519599323s
	I1117 16:53:47.110622   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:47.113467   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.113806   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:47.113837   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.114034   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:47.114292   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:47.114499   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:47.114665   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:47.114842   52964 main.go:141] libmachine: Using SSH client type: native
	I1117 16:53:47.115237   52964 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808940] 0x80b620 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I1117 16:53:47.115265   52964 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1117 16:53:47.226696   52964 main.go:141] libmachine: SSH cmd err, output: <nil>: 1700240027.213773859
	
	I1117 16:53:47.226717   52964 fix.go:206] guest clock: 1700240027.213773859
	I1117 16:53:47.226724   52964 fix.go:219] Guest: 2023-11-17 16:53:47.213773859 +0000 UTC Remote: 2023-11-17 16:53:47.110606819 +0000 UTC m=+50.557811588 (delta=103.16704ms)
	I1117 16:53:47.226757   52964 fix.go:190] guest clock delta is within tolerance: 103.16704ms
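
For context: the fix.go lines above parse the guest's `date +%s.%N` output and compare it with the host clock, accepting the ~103ms delta as within tolerance. A standalone sketch of that comparison (not minikube's exact implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	s, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
	}
	// float64 loses sub-microsecond precision at this magnitude, which is
	// fine for a skew check with a tolerance of a few seconds.
	guest := time.Unix(0, int64(s*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Date(2023, 11, 17, 16, 53, 47, 110606819, time.UTC)
	delta, _ := guestClockDelta("1700240027.213773859", host)
	fmt.Println("delta:", delta) // ~103ms, within tolerance as logged
}
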
	I1117 16:53:47.226762   52964 start.go:83] releasing machines lock for "custom-flannel-081012", held for 29.63595918s
	I1117 16:53:47.226787   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:47.227110   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetIP
	I1117 16:53:47.229918   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.230341   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:47.230369   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.230534   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:47.231077   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:47.231248   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .DriverName
	I1117 16:53:47.231339   52964 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1117 16:53:47.231387   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:47.231481   52964 ssh_runner.go:195] Run: cat /version.json
	I1117 16:53:47.231512   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHHostname
	I1117 16:53:47.234290   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.234484   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.234647   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:47.234681   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.234783   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:47.234805   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:47.234961   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:47.235030   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHPort
	I1117 16:53:47.235135   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:47.235221   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHKeyPath
	I1117 16:53:47.235294   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:47.235370   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetSSHUsername
	I1117 16:53:47.235420   52964 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa Username:docker}
	I1117 16:53:47.235488   52964 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/custom-flannel-081012/id_rsa Username:docker}
	I1117 16:53:47.319241   52964 ssh_runner.go:195] Run: systemctl --version
	I1117 16:53:47.358287   52964 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1117 16:53:47.367840   52964 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1117 16:53:47.367907   52964 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1117 16:53:47.389383   52964 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1117 16:53:47.389410   52964 start.go:472] detecting cgroup driver to use...
	I1117 16:53:47.389545   52964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 16:53:47.409581   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1117 16:53:47.418828   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1117 16:53:47.431858   52964 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I1117 16:53:47.431939   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1117 16:53:47.444786   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1117 16:53:47.457735   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1117 16:53:47.470862   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1117 16:53:47.483703   52964 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1117 16:53:47.494066   52964 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1117 16:53:47.507089   52964 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1117 16:53:47.515447   52964 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1117 16:53:47.527021   52964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:53:47.642923   52964 ssh_runner.go:195] Run: sudo systemctl restart containerd
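
For context: the run of sed/systemctl commands above rewrites /etc/containerd/config.toml for the cgroupfs driver and restarts containerd. A sketch of the same sequence behind a hypothetical ssh-runner-style interface; the Runner type is an assumption, the commands are the ones from the log.

package runtimecfg

import "fmt"

// Runner is a stand-in for minikube's ssh_runner.
type Runner interface {
	Run(cmd string) error
}

func configureContainerdCgroupfs(r Runner) error {
	cmds := []string{
		// pin the pause image and force the cgroupfs driver
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		// migrate legacy runtime names to runc v2
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		// apply the new configuration
		"sudo systemctl daemon-reload",
		"sudo systemctl restart containerd",
	}
	for _, c := range cmds {
		if err := r.Run(c); err != nil {
			return fmt.Errorf("%q failed: %w", c, err)
		}
	}
	return nil
}
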
	I1117 16:53:47.662536   52964 start.go:472] detecting cgroup driver to use...
	I1117 16:53:47.662624   52964 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1117 16:53:47.680944   52964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1117 16:53:47.707648   52964 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1117 16:53:47.728725   52964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1117 16:53:47.741868   52964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1117 16:53:47.753270   52964 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1117 16:53:47.787788   52964 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1117 16:53:47.800327   52964 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1117 16:53:47.820101   52964 ssh_runner.go:195] Run: which cri-dockerd
	I1117 16:53:47.823838   52964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1117 16:53:47.831814   52964 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1117 16:53:47.846245   52964 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1117 16:53:47.955570   52964 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1117 16:53:48.065562   52964 docker.go:560] configuring docker to use "cgroupfs" as cgroup driver...
	I1117 16:53:48.065678   52964 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1117 16:53:48.081979   52964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:53:48.211153   52964 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1117 16:53:49.614642   52964 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.403448771s)
	I1117 16:53:49.614725   52964 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1117 16:53:49.720957   52964 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1117 16:53:49.839469   52964 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1117 16:53:49.970214   52964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:53:50.094024   52964 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1117 16:53:50.111028   52964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:53:50.227757   52964 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1117 16:53:50.307217   52964 start.go:519] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1117 16:53:50.307292   52964 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1117 16:53:50.312989   52964 start.go:540] Will wait 60s for crictl version
	I1117 16:53:50.313078   52964 ssh_runner.go:195] Run: which crictl
	I1117 16:53:50.316843   52964 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1117 16:53:50.379279   52964 start.go:556] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  24.0.7
	RuntimeApiVersion:  v1
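
For context: "Will wait 60s for socket path" above is a stat poll until /var/run/cri-dockerd.sock appears. A local-filesystem sketch of that wait follows; minikube performs the stat over SSH instead.

package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket exists, runtime is accepting CRI calls
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("socket %s not present after %v", path, timeout)
		}
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
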
	I1117 16:53:50.379344   52964 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1117 16:53:50.405073   52964 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1117 16:53:50.435130   52964 out.go:204] * Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
	I1117 16:53:50.435170   52964 main.go:141] libmachine: (custom-flannel-081012) Calling .GetIP
	I1117 16:53:50.437963   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:50.438427   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ad:4a:c2", ip: ""} in network mk-custom-flannel-081012: {Iface:virbr3 ExpiryTime:2023-11-17 17:53:34 +0000 UTC Type:0 Mac:52:54:00:ad:4a:c2 Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:custom-flannel-081012 Clientid:01:52:54:00:ad:4a:c2}
	I1117 16:53:50.438461   52964 main.go:141] libmachine: (custom-flannel-081012) DBG | domain custom-flannel-081012 has defined IP address 192.168.39.84 and MAC address 52:54:00:ad:4a:c2 in network mk-custom-flannel-081012
	I1117 16:53:50.438727   52964 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1117 16:53:50.442697   52964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
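
For context: the bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping. The same edit as a pure function (a hypothetical helper, mirroring the grep -v / echo / cp pipeline):

package hosts

import "strings"

func InjectHostRecord(hostsFile, gatewayIP string) string {
	trimmed := strings.TrimRight(hostsFile, "\n")
	var kept []string
	for _, line := range strings.Split(trimmed, "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the old record, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, gatewayIP+"\thost.minikube.internal")
	return strings.Join(kept, "\n") + "\n"
}
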
	I1117 16:53:50.455344   52964 localpath.go:92] copying /home/jenkins/minikube-integration/17634-9353/.minikube/client.crt -> /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/client.crt
	I1117 16:53:50.455518   52964 localpath.go:117] copying /home/jenkins/minikube-integration/17634-9353/.minikube/client.key -> /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/client.key
	I1117 16:53:50.455656   52964 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1117 16:53:50.455725   52964 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:53:50.477551   52964 docker.go:671] Got preloaded images: 
	I1117 16:53:50.477577   52964 docker.go:677] registry.k8s.io/kube-apiserver:v1.28.3 wasn't preloaded
	I1117 16:53:50.477630   52964 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1117 16:53:50.486431   52964 ssh_runner.go:195] Run: which lz4
	I1117 16:53:50.490982   52964 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1117 16:53:50.496091   52964 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1117 16:53:50.496120   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (422944352 bytes)
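
For context: the stat-then-scp pair above is an existence check; the ~423MB preload tarball is only copied when the guest lacks it. A sketch of that pattern, where Runner and Copy are hypothetical stand-ins for minikube's ssh_runner:

package preload

import "fmt"

type Runner interface {
	Run(cmd string) error            // run a remote command
	Copy(local, remote string) error // scp a local file to the guest
}

func EnsurePreloadTarball(r Runner, local, remote string) error {
	// `stat -c "%s %y"` exits non-zero when the file does not exist.
	if err := r.Run(fmt.Sprintf(`stat -c "%%s %%y" %s`, remote)); err == nil {
		return nil // already present; skip the large transfer
	}
	return r.Copy(local, remote)
}
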
	I1117 16:53:50.141542   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:50.640733   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:51.141704   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:51.641468   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:52.141152   52453 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1117 16:53:52.323024   52453 kubeadm.go:1081] duration metric: took 9.573039796s to wait for elevateKubeSystemPrivileges.
	I1117 16:53:52.323059   52453 kubeadm.go:406] StartCluster complete in 24.079427448s
	I1117 16:53:52.323078   52453 settings.go:142] acquiring lock: {Name:mk8ab4d63ea1a23bc84b956d1b9f549cfe694b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:52.323145   52453 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 16:53:52.324569   52453 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/kubeconfig: {Name:mk607dcc666a8f6951ac185c35151aaa98e0c9e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:52.338693   52453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1117 16:53:52.338762   52453 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1117 16:53:52.338863   52453 addons.go:69] Setting storage-provisioner=true in profile "calico-081012"
	I1117 16:53:52.338880   52453 addons.go:231] Setting addon storage-provisioner=true in "calico-081012"
	I1117 16:53:52.338930   52453 host.go:66] Checking if "calico-081012" exists ...
	I1117 16:53:52.339012   52453 config.go:182] Loaded profile config "calico-081012": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:53:52.339076   52453 cache.go:107] acquiring lock: {Name:mkf6a13e5a5a5c665ab2bdaf32714a22dc9d43ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 16:53:52.339152   52453 cache.go:115] /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 exists
	I1117 16:53:52.339164   52453 cache.go:96] cache image "gcr.io/k8s-minikube/gvisor-addon:2" -> "/home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2" took 93.898µs
	I1117 16:53:52.339175   52453 cache.go:80] save to tar file gcr.io/k8s-minikube/gvisor-addon:2 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 succeeded
	I1117 16:53:52.339183   52453 cache.go:87] Successfully saved all images to host disk.
	I1117 16:53:52.339353   52453 config.go:182] Loaded profile config "calico-081012": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:53:52.339369   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.339405   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.339610   52453 addons.go:69] Setting default-storageclass=true in profile "calico-081012"
	I1117 16:53:52.339635   52453 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-081012"
	I1117 16:53:52.339724   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.339750   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.340030   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.340066   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.358247   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42327
	I1117 16:53:52.358680   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33913
	I1117 16:53:52.359123   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.359604   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.359624   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.359697   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.360050   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.360202   52453 main.go:141] libmachine: (calico-081012) Calling .GetState
	I1117 16:53:52.361397   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.361417   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.361990   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.362585   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.362610   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.363473   52453 addons.go:231] Setting addon default-storageclass=true in "calico-081012"
	I1117 16:53:52.363503   52453 host.go:66] Checking if "calico-081012" exists ...
	I1117 16:53:52.363810   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.363834   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.367132   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35391
	I1117 16:53:52.367575   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.368422   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.368442   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.370184   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.370368   52453 main.go:141] libmachine: (calico-081012) Calling .GetState
	I1117 16:53:52.372707   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.372748   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.383391   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42893
	I1117 16:53:52.384943   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40239
	I1117 16:53:52.385288   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.385743   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.385764   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.385844   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.386075   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.386274   52453 main.go:141] libmachine: (calico-081012) Calling .GetState
	I1117 16:53:52.387264   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.387284   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.387876   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.388478   52453 main.go:141] libmachine: (calico-081012) Calling .DriverName
	I1117 16:53:52.388939   52453 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:53:52.388971   52453 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:53:52.480067   52453 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1117 16:53:52.391466   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34213
	I1117 16:53:52.408721   52453 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42595
	I1117 16:53:52.521460   52453 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 16:53:52.521479   52453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1117 16:53:52.521505   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHHostname
	I1117 16:53:52.523055   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.523180   52453 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:53:52.523887   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.523908   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.524079   52453 main.go:141] libmachine: Using API Version  1
	I1117 16:53:52.524099   52453 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:53:52.524509   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.524581   52453 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:53:52.524751   52453 main.go:141] libmachine: (calico-081012) Calling .DriverName
	I1117 16:53:52.524936   52453 main.go:141] libmachine: (calico-081012) Calling .GetState
	I1117 16:53:52.524955   52453 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:53:52.524978   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHHostname
	I1117 16:53:52.526464   52453 main.go:141] libmachine: (calico-081012) DBG | domain calico-081012 has defined MAC address 52:54:00:7d:b9:f7 in network mk-calico-081012
	I1117 16:53:52.527090   52453 main.go:141] libmachine: (calico-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b9:f7", ip: ""} in network mk-calico-081012: {Iface:virbr4 ExpiryTime:2023-11-17 17:53:07 +0000 UTC Type:0 Mac:52:54:00:7d:b9:f7 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:calico-081012 Clientid:01:52:54:00:7d:b9:f7}
	I1117 16:53:52.527119   52453 main.go:141] libmachine: (calico-081012) DBG | domain calico-081012 has defined IP address 192.168.72.246 and MAC address 52:54:00:7d:b9:f7 in network mk-calico-081012
	I1117 16:53:52.527365   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHPort
	I1117 16:53:52.527421   52453 main.go:141] libmachine: (calico-081012) Calling .DriverName
	I1117 16:53:52.527555   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHKeyPath
	I1117 16:53:52.527664   52453 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1117 16:53:52.527678   52453 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1117 16:53:52.527694   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHHostname
	I1117 16:53:52.527723   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHUsername
	I1117 16:53:52.527899   52453 sshutil.go:53] new ssh client: &{IP:192.168.72.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/calico-081012/id_rsa Username:docker}
	I1117 16:53:52.529435   52453 main.go:141] libmachine: (calico-081012) DBG | domain calico-081012 has defined MAC address 52:54:00:7d:b9:f7 in network mk-calico-081012
	I1117 16:53:52.531011   52453 main.go:141] libmachine: (calico-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b9:f7", ip: ""} in network mk-calico-081012: {Iface:virbr4 ExpiryTime:2023-11-17 17:53:07 +0000 UTC Type:0 Mac:52:54:00:7d:b9:f7 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:calico-081012 Clientid:01:52:54:00:7d:b9:f7}
	I1117 16:53:52.531035   52453 main.go:141] libmachine: (calico-081012) DBG | domain calico-081012 has defined IP address 192.168.72.246 and MAC address 52:54:00:7d:b9:f7 in network mk-calico-081012
	I1117 16:53:52.531116   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHPort
	I1117 16:53:52.531349   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHKeyPath
	I1117 16:53:52.531580   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHUsername
	I1117 16:53:52.531748   52453 sshutil.go:53] new ssh client: &{IP:192.168.72.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/calico-081012/id_rsa Username:docker}
	I1117 16:53:52.532289   52453 main.go:141] libmachine: (calico-081012) DBG | domain calico-081012 has defined MAC address 52:54:00:7d:b9:f7 in network mk-calico-081012
	I1117 16:53:52.532366   52453 main.go:141] libmachine: (calico-081012) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:7d:b9:f7", ip: ""} in network mk-calico-081012: {Iface:virbr4 ExpiryTime:2023-11-17 17:53:07 +0000 UTC Type:0 Mac:52:54:00:7d:b9:f7 Iaid: IPaddr:192.168.72.246 Prefix:24 Hostname:calico-081012 Clientid:01:52:54:00:7d:b9:f7}
	I1117 16:53:52.532398   52453 main.go:141] libmachine: (calico-081012) DBG | domain calico-081012 has defined IP address 192.168.72.246 and MAC address 52:54:00:7d:b9:f7 in network mk-calico-081012
	I1117 16:53:52.532723   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHPort
	I1117 16:53:52.532897   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHKeyPath
	I1117 16:53:52.533039   52453 main.go:141] libmachine: (calico-081012) Calling .GetSSHUsername
	I1117 16:53:52.533167   52453 sshutil.go:53] new ssh client: &{IP:192.168.72.246 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/calico-081012/id_rsa Username:docker}
	I1117 16:53:52.705406   52453 kapi.go:248] "coredns" deployment in "kube-system" namespace and "calico-081012" context rescaled to 1 replicas
	I1117 16:53:52.705451   52453 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.246 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1117 16:53:52.813923   52453 out.go:177] * Verifying Kubernetes components...
	I1117 16:53:49.164231   54653 main.go:141] libmachine: (false-081012) Waiting to get IP...
	I1117 16:53:49.165232   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:49.165774   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:49.165817   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:49.165754   54715 retry.go:31] will retry after 198.244513ms: waiting for machine to come up
	I1117 16:53:49.365232   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:49.365835   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:49.365863   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:49.365795   54715 retry.go:31] will retry after 339.964167ms: waiting for machine to come up
	I1117 16:53:49.707701   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:49.708473   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:49.708503   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:49.708419   54715 retry.go:31] will retry after 334.364207ms: waiting for machine to come up
	I1117 16:53:50.043920   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:50.044469   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:50.044494   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:50.044410   54715 retry.go:31] will retry after 449.555885ms: waiting for machine to come up
	I1117 16:53:50.496073   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:50.496685   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:50.496746   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:50.496642   54715 retry.go:31] will retry after 561.993061ms: waiting for machine to come up
	I1117 16:53:51.060893   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:51.061561   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:51.061586   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:51.061456   54715 retry.go:31] will retry after 613.539374ms: waiting for machine to come up
	I1117 16:53:51.676325   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:51.676869   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:51.676902   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:51.676814   54715 retry.go:31] will retry after 812.529642ms: waiting for machine to come up
	I1117 16:53:52.491235   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:52.491703   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:52.491735   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:52.491659   54715 retry.go:31] will retry after 1.128581484s: waiting for machine to come up
	I1117 16:53:53.442940   46863 system_pods.go:86] 8 kube-system pods found
	I1117 16:53:53.442968   46863 system_pods.go:89] "coredns-5644d7b6d9-d6skc" [b84791e6-06c3-4206-8c48-70951894142f] Running
	I1117 16:53:53.442973   46863 system_pods.go:89] "etcd-old-k8s-version-055844" [63d05fd3-f7d3-4157-86c9-f83d5eb76c6c] Running
	I1117 16:53:53.442978   46863 system_pods.go:89] "kube-apiserver-old-k8s-version-055844" [73b77c69-134c-45ff-8c61-dadcb73424ed] Running
	I1117 16:53:53.442983   46863 system_pods.go:89] "kube-controller-manager-old-k8s-version-055844" [7b8811c7-9633-494d-b754-5e65ced44ca8] Running
	I1117 16:53:53.442987   46863 system_pods.go:89] "kube-proxy-tbz2n" [5d4d7425-8266-48f8-b871-4aa0b32b0e92] Running
	I1117 16:53:53.442990   46863 system_pods.go:89] "kube-scheduler-old-k8s-version-055844" [e9d140e1-2943-47d5-a9eb-2395049f6cfa] Running
	I1117 16:53:53.442997   46863 system_pods.go:89] "metrics-server-74d5856cc6-wrzx4" [0268e920-138e-4270-bff4-5634aa4671e0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1117 16:53:53.443002   46863 system_pods.go:89] "storage-provisioner" [f83ce743-bf52-473e-9c9b-34e9c4f96dec] Running
	I1117 16:53:53.443012   46863 system_pods.go:126] duration metric: took 1m8.9503801s to wait for k8s-apps to be running ...
	I1117 16:53:53.443020   46863 system_svc.go:44] waiting for kubelet service to be running ....
	I1117 16:53:53.443072   46863 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:53:53.467985   46863 system_svc.go:56] duration metric: took 24.956724ms WaitForService to wait for kubelet.
	I1117 16:53:53.468020   46863 kubeadm.go:581] duration metric: took 1m13.838852069s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1117 16:53:53.468043   46863 node_conditions.go:102] verifying NodePressure condition ...
	I1117 16:53:53.472519   46863 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1117 16:53:53.472550   46863 node_conditions.go:123] node cpu capacity is 2
	I1117 16:53:53.472571   46863 node_conditions.go:105] duration metric: took 4.514266ms to run NodePressure ...
	I1117 16:53:53.472583   46863 start.go:228] waiting for startup goroutines ...
	I1117 16:53:53.472592   46863 start.go:233] waiting for cluster config update ...
	I1117 16:53:53.472605   46863 start.go:242] writing updated cluster config ...
	I1117 16:53:53.472917   46863 ssh_runner.go:195] Run: rm -f paused
	I1117 16:53:53.558305   46863 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1117 16:53:53.560298   46863 out.go:177] 
	W1117 16:53:53.562254   46863 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1117 16:53:53.563778   46863 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1117 16:53:53.565754   46863 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-055844" cluster and "default" namespace by default
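
For context: the "(minor skew: 12)" figure above is just the distance between the client's and the cluster's minor versions (1.28.x vs 1.16.x). A small sketch of that arithmetic:

package skew

import (
	"fmt"
	"strconv"
	"strings"
)

func MinorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	k, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > k {
		return c - k, nil
	}
	return k - c, nil
}

// MinorSkew("1.28.4", "1.16.0") == 12, matching the warning in the log.
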
	I1117 16:53:52.733719   52453 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1117 16:53:52.733795   52453 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
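
For context: the sed pipeline above splices a hosts{} stanza into the CoreDNS Corefile ahead of the "forward . /etc/resolv.conf" line so host.minikube.internal resolves to the gateway. The same edit as a pure function (a hypothetical helper, not minikube's code):

package corednscfg

import "strings"

func InjectHostsStanza(corefile, gatewayIP string) string {
	stanza := "        hosts {\n" +
		"           " + gatewayIP + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	marker := "        forward . /etc/resolv.conf"
	// insert the stanza immediately before the forward block, once
	return strings.Replace(corefile, marker, stanza+marker, 1)
}
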
	I1117 16:53:52.764161   52453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1117 16:53:52.769658   52453 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1117 16:53:52.866256   52453 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:53:52.866322   52453 docker.go:677] gcr.io/k8s-minikube/gvisor-addon:2 wasn't preloaded
	I1117 16:53:52.866348   52453 cache_images.go:88] LoadImages start: [gcr.io/k8s-minikube/gvisor-addon:2]
	I1117 16:53:52.868634   52453 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2
	I1117 16:53:52.009619   52964 docker.go:635] Took 1.518668 seconds to copy over tarball
	I1117 16:53:52.009712   52964 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1117 16:53:55.267050   52964 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.257302467s)
	I1117 16:53:55.267079   52964 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1117 16:53:55.313220   52964 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1117 16:53:55.322730   52964 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2629 bytes)
	I1117 16:53:55.342027   52964 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1117 16:53:55.467756   52964 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1117 16:53:56.847831   52453 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.98124411s)
	I1117 16:53:56.847866   52453 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1117 16:53:56.847895   52453 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (3.981616109s)
	I1117 16:53:56.848110   52453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.981481689s)
	I1117 16:53:56.848157   52453 main.go:141] libmachine: Making call to close driver server
	I1117 16:53:56.848170   52453 main.go:141] libmachine: (calico-081012) Calling .Close
	I1117 16:53:56.848192   52453 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.982110671s)
	I1117 16:53:56.848216   52453 main.go:141] libmachine: Making call to close driver server
	I1117 16:53:56.848232   52453 main.go:141] libmachine: (calico-081012) Calling .Close
	I1117 16:53:56.848266   52453 ssh_runner.go:235] Completed: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/gvisor-addon:2: (3.979594579s)
	I1117 16:53:56.848300   52453 cache_images.go:116] "gcr.io/k8s-minikube/gvisor-addon:2" needs transfer: "gcr.io/k8s-minikube/gvisor-addon:2" does not exist at hash "sha256:850e125fb63f257fc76d51c21942e94e1050fb77e4839965e45e6dba59cc1b95" in container runtime
	I1117 16:53:56.848335   52453 docker.go:323] Removing image: gcr.io/k8s-minikube/gvisor-addon:2
	I1117 16:53:56.848378   52453 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/gvisor-addon:2
	I1117 16:53:56.848556   52453 main.go:141] libmachine: (calico-081012) DBG | Closing plugin on server side
	I1117 16:53:56.848564   52453 main.go:141] libmachine: Successfully made call to close driver server
	I1117 16:53:56.848577   52453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 16:53:56.848583   52453 main.go:141] libmachine: (calico-081012) DBG | Closing plugin on server side
	I1117 16:53:56.848593   52453 main.go:141] libmachine: Making call to close driver server
	I1117 16:53:56.848605   52453 main.go:141] libmachine: (calico-081012) Calling .Close
	I1117 16:53:56.848618   52453 main.go:141] libmachine: Successfully made call to close driver server
	I1117 16:53:56.848633   52453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 16:53:56.848647   52453 main.go:141] libmachine: Making call to close driver server
	I1117 16:53:56.848664   52453 main.go:141] libmachine: (calico-081012) Calling .Close
	I1117 16:53:56.848947   52453 main.go:141] libmachine: Successfully made call to close driver server
	I1117 16:53:56.848961   52453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 16:53:56.849031   52453 node_ready.go:35] waiting up to 15m0s for node "calico-081012" to be "Ready" ...
	I1117 16:53:56.849214   52453 main.go:141] libmachine: (calico-081012) DBG | Closing plugin on server side
	I1117 16:53:56.849212   52453 main.go:141] libmachine: Successfully made call to close driver server
	I1117 16:53:56.849242   52453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 16:53:56.898231   52453 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2
	I1117 16:53:56.898327   52453 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2
	I1117 16:53:56.903462   52453 ssh_runner.go:352] existence check for /var/lib/minikube/images/gvisor-addon_2: stat -c "%s %y" /var/lib/minikube/images/gvisor-addon_2: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/gvisor-addon_2': No such file or directory
	I1117 16:53:56.903497   52453 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/cache/images/amd64/gcr.io/k8s-minikube/gvisor-addon_2 --> /var/lib/minikube/images/gvisor-addon_2 (89244160 bytes)
	I1117 16:53:57.043322   52453 main.go:141] libmachine: Making call to close driver server
	I1117 16:53:57.043356   52453 main.go:141] libmachine: (calico-081012) Calling .Close
	I1117 16:53:57.043653   52453 main.go:141] libmachine: Successfully made call to close driver server
	I1117 16:53:57.043663   52453 main.go:141] libmachine: (calico-081012) DBG | Closing plugin on server side
	I1117 16:53:57.043673   52453 main.go:141] libmachine: Making call to close connection to plugin binary
	I1117 16:53:57.046560   52453 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1117 16:53:53.622121   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:53.622888   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:53.622909   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:53.622580   54715 retry.go:31] will retry after 1.516896808s: waiting for machine to come up
	I1117 16:53:55.141662   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:55.142508   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:55.142540   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:55.142464   54715 retry.go:31] will retry after 1.790490761s: waiting for machine to come up
	I1117 16:53:56.935211   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:56.935817   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:56.935845   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:56.935756   54715 retry.go:31] will retry after 1.941006094s: waiting for machine to come up
	I1117 16:53:57.352418   52964 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.884625953s)
	I1117 16:53:57.352513   52964 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1117 16:53:57.376349   52964 docker.go:671] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.3
	registry.k8s.io/kube-scheduler:v1.28.3
	registry.k8s.io/kube-controller-manager:v1.28.3
	registry.k8s.io/kube-proxy:v1.28.3
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1117 16:53:57.376378   52964 cache_images.go:84] Images are preloaded, skipping loading
	I1117 16:53:57.376441   52964 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1117 16:53:57.416340   52964 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1117 16:53:57.416390   52964 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1117 16:53:57.416415   52964 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-081012 NodeName:custom-flannel-081012 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1117 16:53:57.416606   52964 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "custom-flannel-081012"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1117 16:53:57.416701   52964 kubeadm.go:976] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/cri-dockerd.sock --hostname-override=custom-flannel-081012 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:custom-flannel-081012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:}
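The kubeadm YAML and kubelet unit printed above are rendered by minikube from Go templates filled with per-profile values (node IP, CRI socket, version, subnets). A trimmed, hypothetical sketch of that rendering step using the standard text/template package -- the struct and template here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// kubeadmParams carries the handful of per-profile values substituted into
// the config; the field names are made up for this sketch.
type kubeadmParams struct {
	AdvertiseAddress  string
	BindPort          int
	NodeName          string
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// Values copied from the custom-flannel-081012 run in this log.
	_ = t.Execute(os.Stdout, kubeadmParams{
		AdvertiseAddress:  "192.168.39.84",
		BindPort:          8443,
		NodeName:          "custom-flannel-081012",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.28.3",
	})
}

The rendered bytes are then copied to /var/tmp/minikube/kubeadm.yaml.new over SSH, as the scp lines that follow show.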
	I1117 16:53:57.416766   52964 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1117 16:53:57.430987   52964 binaries.go:44] Found k8s binaries, skipping transfer
	I1117 16:53:57.431068   52964 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1117 16:53:57.444053   52964 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (383 bytes)
	I1117 16:53:57.465932   52964 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1117 16:53:57.488078   52964 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I1117 16:53:57.511438   52964 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I1117 16:53:57.516638   52964 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1117 16:53:57.533718   52964 certs.go:56] Setting up /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012 for IP: 192.168.39.84
	I1117 16:53:57.533756   52964 certs.go:190] acquiring lock for shared ca certs: {Name:mk3aceff4c1f2ebe72fd3ef81105f56823f7ec42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:57.533920   52964 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17634-9353/.minikube/ca.key
	I1117 16:53:57.533972   52964 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17634-9353/.minikube/proxy-client-ca.key
	I1117 16:53:57.534084   52964 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/client.key
	I1117 16:53:57.534142   52964 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.key.2e1821a6
	I1117 16:53:57.534167   52964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.crt.2e1821a6 with IP's: [192.168.39.84 10.96.0.1 127.0.0.1 10.0.0.1]
	I1117 16:53:57.852562   52964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.crt.2e1821a6 ...
	I1117 16:53:57.852602   52964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.crt.2e1821a6: {Name:mkd54b54dc2a28584ef70dcc4ab0b607181e7ea3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:57.852833   52964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.key.2e1821a6 ...
	I1117 16:53:57.852858   52964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.key.2e1821a6: {Name:mk1aaa6d8951a8f035bca81b9b3dc899626f8ff2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:57.852971   52964 certs.go:337] copying /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.crt.2e1821a6 -> /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.crt
	I1117 16:53:57.853059   52964 certs.go:341] copying /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.key.2e1821a6 -> /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.key
	I1117 16:53:57.853142   52964 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.key
	I1117 16:53:57.853168   52964 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.crt with IP's: []
	I1117 16:53:57.955984   52964 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.crt ...
	I1117 16:53:57.956029   52964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.crt: {Name:mkee4226fcb0474230fb6093b8a9d58357bd2ec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 16:53:57.956240   52964 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.key ...
	I1117 16:53:57.956263   52964 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.key: {Name:mkcfce4bb2cdde40978e6aad72f7affd07077ac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
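The crypto.go lines above issue a CA-signed apiserver serving certificate whose IP SANs match the list in the log ([192.168.39.84 10.96.0.1 127.0.0.1 10.0.0.1]). A self-contained sketch of that issuance with the standard crypto/x509 package; unlike minikube, which loads its existing ca.crt/ca.key from .minikube, this sketch generates a throwaway CA, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert for the apiserver, with the IP SANs from the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.39.84"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}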
	I1117 16:53:57.956510   52964 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/home/jenkins/minikube-integration/17634-9353/.minikube/certs/16558.pem (1338 bytes)
	W1117 16:53:57.956573   52964 certs.go:433] ignoring /home/jenkins/minikube-integration/17634-9353/.minikube/certs/home/jenkins/minikube-integration/17634-9353/.minikube/certs/16558_empty.pem, impossibly tiny 0 bytes
	I1117 16:53:57.956591   52964 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca-key.pem (1679 bytes)
	I1117 16:53:57.956622   52964 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/home/jenkins/minikube-integration/17634-9353/.minikube/certs/ca.pem (1082 bytes)
	I1117 16:53:57.956656   52964 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/home/jenkins/minikube-integration/17634-9353/.minikube/certs/cert.pem (1123 bytes)
	I1117 16:53:57.956687   52964 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9353/.minikube/certs/home/jenkins/minikube-integration/17634-9353/.minikube/certs/key.pem (1679 bytes)
	I1117 16:53:57.956750   52964 certs.go:437] found cert: /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/165582.pem (1708 bytes)
	I1117 16:53:57.957551   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1117 16:53:57.986974   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1117 16:53:58.024728   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1117 16:53:58.060542   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/custom-flannel-081012/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1117 16:53:58.090787   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1117 16:53:58.123072   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1117 16:53:58.151735   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1117 16:53:58.178512   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1117 16:53:58.205571   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/certs/16558.pem --> /usr/share/ca-certificates/16558.pem (1338 bytes)
	I1117 16:53:58.229424   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/ssl/certs/165582.pem --> /usr/share/ca-certificates/165582.pem (1708 bytes)
	I1117 16:53:58.255223   52964 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17634-9353/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1117 16:53:58.280636   52964 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1117 16:53:58.297259   52964 ssh_runner.go:195] Run: openssl version
	I1117 16:53:58.303215   52964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1117 16:53:58.314341   52964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:53:58.319659   52964 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 17 15:58 /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:53:58.319737   52964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1117 16:53:58.327447   52964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1117 16:53:58.338825   52964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16558.pem && ln -fs /usr/share/ca-certificates/16558.pem /etc/ssl/certs/16558.pem"
	I1117 16:53:58.349684   52964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16558.pem
	I1117 16:53:58.355760   52964 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 17 16:04 /usr/share/ca-certificates/16558.pem
	I1117 16:53:58.355829   52964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16558.pem
	I1117 16:53:58.361591   52964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16558.pem /etc/ssl/certs/51391683.0"
	I1117 16:53:58.372078   52964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/165582.pem && ln -fs /usr/share/ca-certificates/165582.pem /etc/ssl/certs/165582.pem"
	I1117 16:53:58.382728   52964 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/165582.pem
	I1117 16:53:58.387816   52964 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 17 16:04 /usr/share/ca-certificates/165582.pem
	I1117 16:53:58.387881   52964 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/165582.pem
	I1117 16:53:58.395010   52964 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/165582.pem /etc/ssl/certs/3ec20f2e.0"
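The test -L / ln -fs pairs above install OpenSSL subject-hash symlinks: openssl x509 -hash prints the hash under which OpenSSL's c_rehash-style lookup expects a certificate in /etc/ssl/certs (b5213941.0 for minikubeCA.pem in this run). A small Go sketch of the same dance, using only the commands already shown in the log; paths are taken from the log, writing /etc/ssl/certs requires root, and where the log links via the intermediate /etc/ssl/certs/minikubeCA.pem symlink this sketch points straight at the /usr/share copy, which resolves the same:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same command the log runs: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}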
	I1117 16:53:58.405192   52964 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1117 16:53:58.410191   52964 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1117 16:53:58.410251   52964 kubeadm.go:404] StartCluster: {Name:custom-flannel-081012 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:custom-flannel-081012 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:53:58.410389   52964 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1117 16:53:58.439024   52964 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1117 16:53:58.453759   52964 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1117 16:53:58.466448   52964 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1117 16:53:58.480673   52964 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1117 16:53:58.480725   52964 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1117 16:53:58.554063   52964 kubeadm.go:322] [init] Using Kubernetes version: v1.28.3
	I1117 16:53:58.554163   52964 kubeadm.go:322] [preflight] Running pre-flight checks
	I1117 16:53:58.752781   52964 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1117 16:53:58.752890   52964 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1117 16:53:58.753005   52964 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1117 16:53:59.180364   52964 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1117 16:53:57.048099   52453 addons.go:502] enable addons completed in 4.709354045s: enabled=[storage-provisioner default-storageclass]
	I1117 16:53:57.598410   52453 docker.go:290] Loading image: /var/lib/minikube/images/gvisor-addon_2
	I1117 16:53:57.598446   52453 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/gvisor-addon_2 | docker load"
	I1117 16:53:59.043090   52453 node_ready.go:58] node "calico-081012" has status "Ready":"False"
	I1117 16:53:59.183551   52964 out.go:204]   - Generating certificates and keys ...
	I1117 16:53:59.183658   52964 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1117 16:53:59.183739   52964 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1117 16:53:59.503242   52964 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1117 16:53:59.845361   52964 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1117 16:53:59.936398   52964 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1117 16:54:00.140906   52964 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1117 16:54:00.511436   52964 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1117 16:54:00.511697   52964 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-081012 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I1117 16:54:00.921758   52964 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1117 16:54:00.922052   52964 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-081012 localhost] and IPs [192.168.39.84 127.0.0.1 ::1]
	I1117 16:54:01.196692   52964 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1117 16:54:01.509194   52964 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1117 16:54:01.666030   52964 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1117 16:54:01.666161   52964 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1117 16:54:01.746561   52964 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1117 16:54:01.967632   52964 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1117 16:54:02.503644   52964 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1117 16:54:02.697257   52964 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1117 16:54:02.698241   52964 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1117 16:54:02.700819   52964 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1117 16:53:58.878717   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:53:58.879253   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:53:58.879284   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:53:58.879211   54715 retry.go:31] will retry after 2.681180553s: waiting for machine to come up
	I1117 16:54:01.563264   54653 main.go:141] libmachine: (false-081012) DBG | domain false-081012 has defined MAC address 52:54:00:18:8f:c6 in network mk-false-081012
	I1117 16:54:01.563680   54653 main.go:141] libmachine: (false-081012) DBG | unable to find current IP address of domain false-081012 in network mk-false-081012
	I1117 16:54:01.563701   54653 main.go:141] libmachine: (false-081012) DBG | I1117 16:54:01.563654   54715 retry.go:31] will retry after 3.053913959s: waiting for machine to come up
	
	* 
	* ==> Docker <==
	* -- Journal begins at Fri 2023-11-17 16:46:42 UTC, ends at Fri 2023-11-17 16:54:05 UTC. --
	Nov 17 16:53:00 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:00.932007997Z" level=info msg="shim disconnected" id=004b0fd612ec44256fe4f3c5c0953fb7e0f0c0f8b14ce7bf09eaafbd4c05be5c namespace=moby
	Nov 17 16:53:00 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:00.932081273Z" level=warning msg="cleaning up after shim disconnected" id=004b0fd612ec44256fe4f3c5c0953fb7e0f0c0f8b14ce7bf09eaafbd4c05be5c namespace=moby
	Nov 17 16:53:00 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:00.932095487Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 17 16:53:07 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:07.434117900Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:07 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:07.434705762Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:07 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:07.437779851Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.208053203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.208857877Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.208966950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.208990046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:11.631584637Z" level=info msg="ignoring event" container=50be6d420ffb98da5f40a08216318b57fa38136ad23f06f7f02eb93a2e2e4567 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.632527577Z" level=info msg="shim disconnected" id=50be6d420ffb98da5f40a08216318b57fa38136ad23f06f7f02eb93a2e2e4567 namespace=moby
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.632570571Z" level=warning msg="cleaning up after shim disconnected" id=50be6d420ffb98da5f40a08216318b57fa38136ad23f06f7f02eb93a2e2e4567 namespace=moby
	Nov 17 16:53:11 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:11.632578401Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.535525370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.536144787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.536386840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.536405207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:32.970075425Z" level=info msg="ignoring event" container=e5abe6b9983d86d036f39f9a2f007f3199b891938b253061a5381023028a3cae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.970076380Z" level=info msg="shim disconnected" id=e5abe6b9983d86d036f39f9a2f007f3199b891938b253061a5381023028a3cae namespace=moby
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.971160148Z" level=warning msg="cleaning up after shim disconnected" id=e5abe6b9983d86d036f39f9a2f007f3199b891938b253061a5381023028a3cae namespace=moby
	Nov 17 16:53:32 old-k8s-version-055844 dockerd[1209]: time="2023-11-17T16:53:32.971496391Z" level=info msg="cleaning up dead shim" namespace=moby
	Nov 17 16:53:34 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:34.442122887Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:34 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:34.442693615Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:34 old-k8s-version-055844 dockerd[1203]: time="2023-11-17T16:53:34.446821540Z" level=error msg="Handler for POST /images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	
	* 
	* ==> container status <==
	* time="2023-11-17T16:54:05Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/dockershim.sock\": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
	CONTAINER ID   IMAGE                    COMMAND                  CREATED              STATUS                      PORTS     NAMES
	e5abe6b9983d   a90209bb39e3             "nginx -g 'daemon of…"   33 seconds ago       Exited (1) 32 seconds ago             k8s_dashboard-metrics-scraper_dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard_262d85ab-7391-47cf-b382-8e9e45dbcb21_3
	5070ae462315   kubernetesui/dashboard   "/dashboard --insecu…"   About a minute ago   Up About a minute                     k8s_kubernetes-dashboard_kubernetes-dashboard-84b68f675b-x8jwk_kubernetes-dashboard_6c749da1-3cef-4e72-8236-d8834a7c30db_0
	443a3521c0b2   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard_262d85ab-7391-47cf-b382-8e9e45dbcb21_0
	689abb2046cd   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_metrics-server-74d5856cc6-wrzx4_kube-system_0268e920-138e-4270-bff4-5634aa4671e0_0
	c6ea4230ae89   6e38f40d628d             "/storage-provisioner"   About a minute ago   Up About a minute                     k8s_storage-provisioner_storage-provisioner_kube-system_f83ce743-bf52-473e-9c9b-34e9c4f96dec_0
	086da5b1debf   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kubernetes-dashboard-84b68f675b-x8jwk_kubernetes-dashboard_6c749da1-3cef-4e72-8236-d8834a7c30db_0
	6f0c542e5fd2   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_storage-provisioner_kube-system_f83ce743-bf52-473e-9c9b-34e9c4f96dec_0
	c707b4f4d423   bf261d157914             "/coredns -conf /etc…"   About a minute ago   Up About a minute                     k8s_coredns_coredns-5644d7b6d9-d6skc_kube-system_b84791e6-06c3-4206-8c48-70951894142f_0
	127961a02459   c21b0c7400f9             "/usr/local/bin/kube…"   About a minute ago   Up About a minute                     k8s_kube-proxy_kube-proxy-tbz2n_kube-system_5d4d7425-8266-48f8-b871-4aa0b32b0e92_0
	d1c623137106   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-proxy-tbz2n_kube-system_5d4d7425-8266-48f8-b871-4aa0b32b0e92_0
	8db4c477332d   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_coredns-5644d7b6d9-d6skc_kube-system_b84791e6-06c3-4206-8c48-70951894142f_0
	a147f7c84567   301ddc62b80b             "kube-scheduler --au…"   About a minute ago   Up About a minute                     k8s_kube-scheduler_kube-scheduler-old-k8s-version-055844_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	24e2eee70688   06a629a7e51c             "kube-controller-man…"   About a minute ago   Up About a minute                     k8s_kube-controller-manager_kube-controller-manager-old-k8s-version-055844_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	c22aa8afe54d   b305571ca60a             "kube-apiserver --ad…"   About a minute ago   Up About a minute                     k8s_kube-apiserver_kube-apiserver-old-k8s-version-055844_kube-system_4dbeffb5898cca2c56a4c6367a3c34e3_0
	af3e781bbbc9   b2756210eeab             "etcd --advertise-cl…"   About a minute ago   Up About a minute                     k8s_etcd_etcd-old-k8s-version-055844_kube-system_bfb98b047fc03d556c3372a84c2f5c0d_0
	6a6c145e1281   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-scheduler-old-k8s-version-055844_kube-system_b3d303074fe0ca1d42a8bd9ed248df09_0
	4f3b9fff11f4   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-controller-manager-old-k8s-version-055844_kube-system_7376ddb4f190a0ded9394063437bcb4e_0
	8f55c49c8a59   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_kube-apiserver-old-k8s-version-055844_kube-system_4dbeffb5898cca2c56a4c6367a3c34e3_0
	d8185c32f5b9   k8s.gcr.io/pause:3.1     "/pause"                 About a minute ago   Up About a minute                     k8s_POD_etcd-old-k8s-version-055844_kube-system_bfb98b047fc03d556c3372a84c2f5c0d_0
	
	* 
	* ==> coredns [c707b4f4d423] <==
	* .:53
	2023-11-17T16:52:41.582Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-17T16:52:41.582Z [INFO] CoreDNS-1.6.2
	2023-11-17T16:52:41.582Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	2023-11-17T16:53:04.452Z [INFO] plugin/reload: Running configuration MD5 = 06ff7f9bb57317d7ab02f5fb9baaa00d
	[INFO] Reloading complete
	2023-11-17T16:53:04.464Z [INFO] 127.0.0.1:44617 - 17415 "HINFO IN 5256796578685608658.7864464771204236880. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.010779665s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-055844
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-055844
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=49db7ae766960f8f9e07cffcbe974581755c3ae6
	                    minikube.k8s.io/name=old-k8s-version-055844
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_17T16_52_24_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Nov 2023 16:52:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Nov 2023 16:53:18 +0000   Fri, 17 Nov 2023 16:52:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Nov 2023 16:53:18 +0000   Fri, 17 Nov 2023 16:52:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Nov 2023 16:53:18 +0000   Fri, 17 Nov 2023 16:52:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Nov 2023 16:53:18 +0000   Fri, 17 Nov 2023 16:52:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.169
	  Hostname:    old-k8s-version-055844
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 f3a52e971cbd4dc3aef85db8e0705259
	 System UUID:                f3a52e97-1cbd-4dc3-aef8-5db8e0705259
	 Boot ID:                    835b6f0e-faa3-4880-bc26-b8b6053f2620
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://24.0.7
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-d6skc                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                etcd-old-k8s-version-055844                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                kube-apiserver-old-k8s-version-055844             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                kube-controller-manager-old-k8s-version-055844    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                kube-proxy-tbz2n                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                kube-scheduler-old-k8s-version-055844             100m (5%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                metrics-server-74d5856cc6-wrzx4                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         82s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kubernetes-dashboard       dashboard-metrics-scraper-d6b4b5544-mb2qb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  kubernetes-dashboard       kubernetes-dashboard-84b68f675b-x8jwk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From                                Message
	  ----    ------                   ----                 ----                                -------
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet, old-k8s-version-055844     Node old-k8s-version-055844 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet, old-k8s-version-055844     Node old-k8s-version-055844 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x7 over 115s)  kubelet, old-k8s-version-055844     Node old-k8s-version-055844 status is now: NodeHasSufficientPID
	  Normal  Starting                 84s                  kube-proxy, old-k8s-version-055844  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov17 16:46] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.071173] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.581131] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.223962] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156809] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.690772] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.149464] systemd-fstab-generator[516]: Ignoring "noauto" for root device
	[  +0.118369] systemd-fstab-generator[527]: Ignoring "noauto" for root device
	[  +1.240478] systemd-fstab-generator[889]: Ignoring "noauto" for root device
	[  +0.315929] systemd-fstab-generator[929]: Ignoring "noauto" for root device
	[  +0.120727] systemd-fstab-generator[940]: Ignoring "noauto" for root device
	[  +0.123113] systemd-fstab-generator[953]: Ignoring "noauto" for root device
	[  +6.184854] systemd-fstab-generator[1194]: Ignoring "noauto" for root device
	[Nov17 16:47] kauditd_printk_skb: 67 callbacks suppressed
	[ +12.781124] systemd-fstab-generator[1673]: Ignoring "noauto" for root device
	[  +0.561682] kauditd_printk_skb: 29 callbacks suppressed
	[  +0.135602] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +23.923623] kauditd_printk_skb: 6 callbacks suppressed
	[Nov17 16:52] systemd-fstab-generator[6866]: Ignoring "noauto" for root device
	[ +44.599514] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [af3e781bbbc9] <==
	* 2023-11-17 16:52:20.515637 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (429.858573ms) to execute
	2023-11-17 16:52:20.515976 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (548.996707ms) to execute
	2023-11-17 16:52:20.785372 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (220.044287ms) to execute
	2023-11-17 16:52:20.785781 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-055844.1798773564be3c03\" " with result "range_response_count:0 size:4" took too long (263.735733ms) to execute
	2023-11-17 16:52:20.786198 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:pod-garbage-collector\" " with result "range_response_count:0 size:4" took too long (262.997558ms) to execute
	2023-11-17 16:52:20.786737 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (200.157789ms) to execute
	2023-11-17 16:52:21.228319 W | etcdserver: request "header:<ID:4723866845604088736 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterrolebindings/system:controller:resourcequota-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/system:controller:resourcequota-controller\" value_size:414 >> failure:<>>" with result "size:14" took too long (121.30148ms) to execute
	2023-11-17 16:52:21.228969 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-055844.1798773564be7eab\" " with result "range_response_count:0 size:4" took too long (188.09056ms) to execute
	2023-11-17 16:52:21.229248 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (163.914696ms) to execute
	2023-11-17 16:52:21.229690 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (144.048787ms) to execute
	2023-11-17 16:52:21.827192 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (241.42872ms) to execute
	2023-11-17 16:52:21.827491 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:service-account-controller\" " with result "range_response_count:0 size:4" took too long (590.651908ms) to execute
	2023-11-17 16:52:21.828118 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (561.667733ms) to execute
	2023-11-17 16:52:21.828895 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (338.798887ms) to execute
	2023-11-17 16:52:21.829729 W | etcdserver: read-only range request "key:\"/registry/namespaces/default\" " with result "range_response_count:0 size:4" took too long (535.208063ms) to execute
	2023-11-17 16:52:22.088983 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:statefulset-controller\" " with result "range_response_count:0 size:4" took too long (194.131967ms) to execute
	2023-11-17 16:52:22.089360 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-055844.1798773564be3c03\" " with result "range_response_count:0 size:4" took too long (195.662289ms) to execute
	2023-11-17 16:52:22.089782 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:4" took too long (117.273475ms) to execute
	2023-11-17 16:52:22.342126 W | etcdserver: read-only range request "key:\"/registry/events/default/old-k8s-version-055844.1798773564be6dac\" " with result "range_response_count:0 size:5" took too long (101.018773ms) to execute
	2023-11-17 16:52:22.342391 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings/system:controller:certificate-controller\" " with result "range_response_count:0 size:5" took too long (182.84953ms) to execute
	2023-11-17 16:52:22.342825 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (177.590861ms) to execute
	2023-11-17 16:52:22.663728 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (109.464518ms) to execute
	2023-11-17 16:52:43.073627 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-84b68f675b-x8jwk\" " with result "range_response_count:1 size:1425" took too long (129.411579ms) to execute
	2023-11-17 16:53:24.272239 W | etcdserver: read-only range request "key:\"/registry/events\" range_end:\"/registry/eventt\" count_only:true " with result "range_response_count:0 size:7" took too long (117.176485ms) to execute
	2023-11-17 16:53:58.339278 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:752" took too long (137.407589ms) to execute
	
	* 
	* ==> kernel <==
	*  16:54:05 up 7 min,  0 users,  load average: 0.63, 0.49, 0.23
	Linux old-k8s-version-055844 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [c22aa8afe54d] <==
	* Trace[953649673]: [538.656438ms] [538.623664ms] END
	I1117 16:52:21.831011       1 trace.go:116] Trace[743459659]: "Create" url:/api/v1/namespaces/default/events (started: 2023-11-17 16:52:21.240502669 +0000 UTC m=+9.173356291) (total time: 590.493012ms):
	Trace[743459659]: [590.493012ms] [590.44313ms] END
	I1117 16:52:21.831419       1 trace.go:116] Trace[981314395]: "Get" url:/apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller (started: 2023-11-17 16:52:21.236011483 +0000 UTC m=+9.168865099) (total time: 595.389871ms):
	Trace[981314395]: [595.389871ms] [595.355081ms] END
	I1117 16:52:22.388419       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1117 16:52:22.666843       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1117 16:52:22.793404       1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.50.169]
	I1117 16:52:22.794641       1 controller.go:606] quota admission added evaluator for: endpoints
	I1117 16:52:23.710040       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I1117 16:52:24.298628       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I1117 16:52:24.572144       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I1117 16:52:39.024214       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I1117 16:52:39.110593       1 controller.go:606] quota admission added evaluator for: events.events.k8s.io
	I1117 16:52:39.233978       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I1117 16:52:44.131884       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1117 16:52:44.132343       1 handler_proxy.go:99] no RequestInfo found in the context
	E1117 16:52:44.132659       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1117 16:52:44.132672       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1117 16:53:44.133276       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1117 16:53:44.133375       1 handler_proxy.go:99] no RequestInfo found in the context
	E1117 16:53:44.133406       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1117 16:53:44.133413       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [24e2eee70688] <==
	* E1117 16:52:42.629899       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.630586       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"d9d44c31-b8dc-453a-93ca-5285a1eaf5a3", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.634787       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"bb7c7e63-642a-4d8c-ad61-09efc551487a", APIVersion:"apps/v1", ResourceVersion:"417", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.660400       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.661136       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.661203       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"d9d44c31-b8dc-453a-93ca-5285a1eaf5a3", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.681403       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.681982       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.681997       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"bb7c7e63-642a-4d8c-ad61-09efc551487a", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.682650       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"d9d44c31-b8dc-453a-93ca-5285a1eaf5a3", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.695399       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544" failed with pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.695985       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.696268       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"d9d44c31-b8dc-453a-93ca-5285a1eaf5a3", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-d6b4b5544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.696393       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"bb7c7e63-642a-4d8c-ad61-09efc551487a", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.706685       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.707024       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"bb7c7e63-642a-4d8c-ad61-09efc551487a", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E1117 16:52:42.745693       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-84b68f675b" failed with pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.746053       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"bb7c7e63-642a-4d8c-ad61-09efc551487a", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-84b68f675b-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I1117 16:52:42.840663       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-84b68f675b", UID:"bb7c7e63-642a-4d8c-ad61-09efc551487a", APIVersion:"apps/v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-84b68f675b-x8jwk
	I1117 16:52:43.121071       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-74d5856cc6", UID:"0d5882f7-6024-456e-a1b7-1f30e326fe7c", APIVersion:"apps/v1", ResourceVersion:"370", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-74d5856cc6-wrzx4
	I1117 16:52:43.782888       1 event.go:255] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-d6b4b5544", UID:"d9d44c31-b8dc-453a-93ca-5285a1eaf5a3", APIVersion:"apps/v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-d6b4b5544-mb2qb
	E1117 16:53:09.649537       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1117 16:53:11.490703       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1117 16:53:39.902099       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1117 16:53:43.492673       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
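
	The FailedCreate burst above is a normal startup race: the ReplicaSet controller retries until the kubernetes-dashboard ServiceAccount exists, after which both pods are created (the SuccessfulCreate events at 16:52:42-43). As an illustrative aside, a minimal client-go sketch that waits for that ServiceAccount before proceeding; the kubeconfig path is an assumption, and this is not code from the test suite:

	package main

	import (
		"context"
		"fmt"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig path; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Poll until the ServiceAccount the ReplicaSets need exists.
		for {
			_, err := cs.CoreV1().ServiceAccounts("kubernetes-dashboard").
				Get(context.TODO(), "kubernetes-dashboard", metav1.GetOptions{})
			if err == nil {
				fmt.Println("serviceaccount ready")
				return
			}
			time.Sleep(time.Second)
		}
	}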
	
	* 
	* ==> kube-proxy [127961a02459] <==
	* W1117 16:52:41.306232       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1117 16:52:41.378057       1 node.go:135] Successfully retrieved node IP: 192.168.50.169
	I1117 16:52:41.378105       1 server_others.go:149] Using iptables Proxier.
	I1117 16:52:41.378984       1 server.go:529] Version: v1.16.0
	I1117 16:52:41.380752       1 config.go:313] Starting service config controller
	I1117 16:52:41.380774       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1117 16:52:41.384576       1 config.go:131] Starting endpoints config controller
	I1117 16:52:41.384603       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1117 16:52:41.486181       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1117 16:52:41.486301       1 shared_informer.go:204] Caches are synced for service config 
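
	The "Waiting for caches to sync" / "Caches are synced" pair logged by kube-proxy is client-go's shared-informer handshake. A generic sketch of the same pattern (a toy consumer with an assumed kubeconfig path, not kube-proxy's actual service/endpoints controllers):

	package main

	import (
		"k8s.io/client-go/informers"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/cache"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		stopCh := make(chan struct{})
		defer close(stopCh)

		// Start the shared Service informer, then block until its cache has
		// synced -- the same handshake kube-proxy logs above.
		factory := informers.NewSharedInformerFactory(cs, 0) // 0 = no periodic resync
		svcInformer := factory.Core().V1().Services().Informer()
		factory.Start(stopCh)
		if !cache.WaitForCacheSync(stopCh, svcInformer.HasSynced) {
			panic("timed out waiting for the service cache to sync")
		}
	}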
	
	* 
	* ==> kube-scheduler [a147f7c84567] <==
	* W1117 16:52:17.541075       1 authentication.go:79] Authentication is disabled
	I1117 16:52:17.541089       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1117 16:52:17.545790       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1117 16:52:17.642794       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1117 16:52:17.645762       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1117 16:52:17.646516       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1117 16:52:17.646899       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1117 16:52:17.647115       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1117 16:52:17.648559       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1117 16:52:17.653296       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1117 16:52:17.653742       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1117 16:52:17.653855       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1117 16:52:17.653902       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1117 16:52:17.653942       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1117 16:52:18.644587       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1117 16:52:18.648232       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1117 16:52:18.649913       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1117 16:52:18.651888       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1117 16:52:18.652930       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1117 16:52:18.653756       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1117 16:52:18.657359       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1117 16:52:18.657772       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1117 16:52:18.660003       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1117 16:52:18.662685       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1117 16:52:18.662994       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
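
	The scheduler's "forbidden" errors are another boot-order race: its informers start listing before the system:kube-scheduler RBAC bindings are reconciled, and the errors stop once they are. One hedged way to probe such a permission from a client is a SelfSubjectAccessReview; the verb and resource below match the first error above, and the kubeconfig path is an assumption:

	package main

	import (
		"context"
		"fmt"

		authv1 "k8s.io/api/authorization/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Ask the API server whether the current identity may list nodes
		// cluster-wide -- the access the scheduler was being denied above.
		sar := &authv1.SelfSubjectAccessReview{
			Spec: authv1.SelfSubjectAccessReviewSpec{
				ResourceAttributes: &authv1.ResourceAttributes{
					Verb:     "list",
					Resource: "nodes",
				},
			},
		}
		resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().
			Create(context.TODO(), sar, metav1.CreateOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
	}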
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Fri 2023-11-17 16:46:42 UTC, ends at Fri 2023-11-17 16:54:05 UTC. --
	Nov 17 16:53:02 old-k8s-version-055844 kubelet[6884]: E1117 16:53:02.356663    6884 pod_workers.go:191] Error syncing pod 262d85ab-7391-47cf-b382-8e9e45dbcb21 ("dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"
	Nov 17 16:53:07 old-k8s-version-055844 kubelet[6884]: E1117 16:53:07.438499    6884 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 17 16:53:07 old-k8s-version-055844 kubelet[6884]: E1117 16:53:07.438541    6884 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 17 16:53:07 old-k8s-version-055844 kubelet[6884]: E1117 16:53:07.438605    6884 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 17 16:53:07 old-k8s-version-055844 kubelet[6884]: E1117 16:53:07.438636    6884 pod_workers.go:191] Error syncing pod 0268e920-138e-4270-bff4-5634aa4671e0 ("metrics-server-74d5856cc6-wrzx4_kube-system(0268e920-138e-4270-bff4-5634aa4671e0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:11 old-k8s-version-055844 kubelet[6884]: W1117 16:53:11.436594    6884 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-mb2qb through plugin: invalid network status for
	Nov 17 16:53:11 old-k8s-version-055844 kubelet[6884]: W1117 16:53:11.674071    6884 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod262d85ab-7391-47cf-b382-8e9e45dbcb21/50be6d420ffb98da5f40a08216318b57fa38136ad23f06f7f02eb93a2e2e4567": none of the resources are being tracked.
	Nov 17 16:53:12 old-k8s-version-055844 kubelet[6884]: W1117 16:53:12.629630    6884 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-mb2qb through plugin: invalid network status for
	Nov 17 16:53:12 old-k8s-version-055844 kubelet[6884]: E1117 16:53:12.637130    6884 pod_workers.go:191] Error syncing pod 262d85ab-7391-47cf-b382-8e9e45dbcb21 ("dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"
	Nov 17 16:53:13 old-k8s-version-055844 kubelet[6884]: W1117 16:53:13.651563    6884 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-mb2qb through plugin: invalid network status for
	Nov 17 16:53:20 old-k8s-version-055844 kubelet[6884]: E1117 16:53:20.408951    6884 pod_workers.go:191] Error syncing pod 0268e920-138e-4270-bff4-5634aa4671e0 ("metrics-server-74d5856cc6-wrzx4_kube-system(0268e920-138e-4270-bff4-5634aa4671e0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 17 16:53:21 old-k8s-version-055844 kubelet[6884]: E1117 16:53:21.103321    6884 pod_workers.go:191] Error syncing pod 262d85ab-7391-47cf-b382-8e9e45dbcb21 ("dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"
	Nov 17 16:53:32 old-k8s-version-055844 kubelet[6884]: W1117 16:53:32.826749    6884 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-mb2qb through plugin: invalid network status for
	Nov 17 16:53:33 old-k8s-version-055844 kubelet[6884]: W1117 16:53:33.024929    6884 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod262d85ab-7391-47cf-b382-8e9e45dbcb21/e5abe6b9983d86d036f39f9a2f007f3199b891938b253061a5381023028a3cae": none of the resources are being tracked.
	Nov 17 16:53:33 old-k8s-version-055844 kubelet[6884]: W1117 16:53:33.967399    6884 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-mb2qb through plugin: invalid network status for
	Nov 17 16:53:33 old-k8s-version-055844 kubelet[6884]: E1117 16:53:33.976683    6884 pod_workers.go:191] Error syncing pod 262d85ab-7391-47cf-b382-8e9e45dbcb21 ("dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"
	Nov 17 16:53:34 old-k8s-version-055844 kubelet[6884]: E1117 16:53:34.447878    6884 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 17 16:53:34 old-k8s-version-055844 kubelet[6884]: E1117 16:53:34.448063    6884 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 17 16:53:34 old-k8s-version-055844 kubelet[6884]: E1117 16:53:34.448175    6884 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host
	Nov 17 16:53:34 old-k8s-version-055844 kubelet[6884]: E1117 16:53:34.448288    6884 pod_workers.go:191] Error syncing pod 0268e920-138e-4270-bff4-5634aa4671e0 ("metrics-server-74d5856cc6-wrzx4_kube-system(0268e920-138e-4270-bff4-5634aa4671e0)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.122.1:53: no such host"
	Nov 17 16:53:34 old-k8s-version-055844 kubelet[6884]: W1117 16:53:34.992603    6884 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-d6b4b5544-mb2qb through plugin: invalid network status for
	Nov 17 16:53:41 old-k8s-version-055844 kubelet[6884]: E1117 16:53:41.102715    6884 pod_workers.go:191] Error syncing pod 262d85ab-7391-47cf-b382-8e9e45dbcb21 ("dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"
	Nov 17 16:53:45 old-k8s-version-055844 kubelet[6884]: E1117 16:53:45.407985    6884 pod_workers.go:191] Error syncing pod 0268e920-138e-4270-bff4-5634aa4671e0 ("metrics-server-74d5856cc6-wrzx4_kube-system(0268e920-138e-4270-bff4-5634aa4671e0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 17 16:53:54 old-k8s-version-055844 kubelet[6884]: E1117 16:53:54.405960    6884 pod_workers.go:191] Error syncing pod 262d85ab-7391-47cf-b382-8e9e45dbcb21 ("dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-d6b4b5544-mb2qb_kubernetes-dashboard(262d85ab-7391-47cf-b382-8e9e45dbcb21)"
	Nov 17 16:53:58 old-k8s-version-055844 kubelet[6884]: E1117 16:53:58.412508    6884 pod_workers.go:191] Error syncing pod 0268e920-138e-4270-bff4-5634aa4671e0 ("metrics-server-74d5856cc6-wrzx4_kube-system(0268e920-138e-4270-bff4-5634aa4671e0)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
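
	The metrics-server ErrImagePull/ImagePullBackOff loop is expected here: the test points the image at fake.domain, a deliberately unresolvable registry host, and the kubelet's DNS lookup via 192.168.122.1:53 fails. The failure is reproducible outside the kubelet with a short Go check:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain is the registry host injected by the test; resolution
		// should fail with "no such host", matching the kubelet errors above.
		addrs, err := net.LookupHost("fake.domain")
		fmt.Println(addrs, err)
	}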
	
	* 
	* ==> kubernetes-dashboard [5070ae462315] <==
	* 2023/11/17 16:52:52 Starting overwatch
	2023/11/17 16:52:52 Using namespace: kubernetes-dashboard
	2023/11/17 16:52:52 Using in-cluster config to connect to apiserver
	2023/11/17 16:52:52 Using secret token for csrf signing
	2023/11/17 16:52:52 Initializing csrf token from kubernetes-dashboard-csrf secret
	2023/11/17 16:52:52 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2023/11/17 16:52:52 Successful initial request to the apiserver, version: v1.16.0
	2023/11/17 16:52:52 Generating JWE encryption key
	2023/11/17 16:52:52 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2023/11/17 16:52:52 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2023/11/17 16:52:53 Initializing JWE encryption key from synchronized object
	2023/11/17 16:52:53 Creating in-cluster Sidecar client
	2023/11/17 16:52:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/17 16:52:53 Serving insecurely on HTTP port: 9090
	2023/11/17 16:53:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2023/11/17 16:53:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	* 
	* ==> storage-provisioner [c6ea4230ae89] <==
	* I1117 16:52:43.654230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1117 16:52:43.709787       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1117 16:52:43.709937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1117 16:52:43.731818       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1117 16:52:43.733000       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-055844_509432e8-6dfb-45af-b8f9-f6c0a29eef09!
	I1117 16:52:43.735405       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"975394a3-8426-4ffd-864f-40861b5272b4", APIVersion:"v1", ResourceVersion:"468", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-055844_509432e8-6dfb-45af-b8f9-f6c0a29eef09 became leader
	I1117 16:52:43.834238       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-055844_509432e8-6dfb-45af-b8f9-f6c0a29eef09!
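
	storage-provisioner's startup above is a standard client-go leader election (it uses an Endpoints-based lock, per the event's Kind:"Endpoints"). A rough sketch of the same flow using the modern Leases lock; the lock name and kubeconfig path are assumptions, and this is not the provisioner's actual code:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// Identity distinguishes candidates; the hostname is a common choice.
		id, _ := os.Hostname()
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "demo-hostpath-lock", // assumed lock name
			cs.CoreV1(), cs.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: id})
		if err != nil {
			panic(err)
		}

		// Blocks; the callbacks fire on the transitions the log records above.
		leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { fmt.Println("became leader") },
				OnStoppedLeading: func() { fmt.Println("lost leadership") },
			},
		})
	}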
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-055844 -n old-k8s-version-055844
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-055844 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-wrzx4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-055844 describe pod metrics-server-74d5856cc6-wrzx4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-055844 describe pod metrics-server-74d5856cc6-wrzx4: exit status 1 (76.854542ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-wrzx4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-055844 describe pod metrics-server-74d5856cc6-wrzx4: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (2.58s)
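
The post-mortem above collects leftover pods with a kubectl field selector. For reference, the equivalent query through client-go (a sketch mirroring the kubectl call, not the suite's helper code; the kubeconfig path is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		// List pods in all namespaces whose phase is not Running, the same
		// filter as `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}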

                                                
                                    

Test pass (283/321)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 44.56
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.3/json-events 28.08
11 TestDownloadOnly/v1.28.3/preload-exists 0
15 TestDownloadOnly/v1.28.3/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.58
20 TestOffline 129.3
23 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
24 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
25 TestAddons/Setup 160.76
27 TestAddons/parallel/Registry 16.67
28 TestAddons/parallel/Ingress 25.93
29 TestAddons/parallel/InspektorGadget 10.83
30 TestAddons/parallel/MetricsServer 5.82
31 TestAddons/parallel/HelmTiller 12.67
33 TestAddons/parallel/CSI 68.13
34 TestAddons/parallel/Headlamp 16.67
35 TestAddons/parallel/CloudSpanner 5.82
36 TestAddons/parallel/LocalPath 55.24
37 TestAddons/parallel/NvidiaDevicePlugin 5.71
40 TestAddons/serial/GCPAuth/Namespaces 0.13
41 TestAddons/StoppedEnableDisable 13.41
42 TestCertOptions 85.81
43 TestCertExpiration 290.16
44 TestDockerFlags 59.25
45 TestForceSystemdFlag 52.59
46 TestForceSystemdEnv 85.94
48 TestKVMDriverInstallOrUpdate 4.88
53 TestErrorSpam/start 0.39
54 TestErrorSpam/status 0.7
55 TestErrorSpam/pause 4.51
56 TestErrorSpam/unpause 5.77
57 TestErrorSpam/stop 106.97
60 TestFunctional/serial/CopySyncFile 0
61 TestFunctional/serial/StartWithProxy 65.47
62 TestFunctional/serial/AuditLog 0
63 TestFunctional/serial/SoftStart 39.07
64 TestFunctional/serial/KubeContext 0.04
65 TestFunctional/serial/KubectlGetPods 0.07
68 TestFunctional/serial/CacheCmd/cache/add_remote 3.79
69 TestFunctional/serial/CacheCmd/cache/add_local 1.74
70 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
71 TestFunctional/serial/CacheCmd/cache/list 0.07
72 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
73 TestFunctional/serial/CacheCmd/cache/cache_reload 1.61
74 TestFunctional/serial/CacheCmd/cache/delete 0.12
75 TestFunctional/serial/MinikubeKubectlCmd 0.12
76 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
77 TestFunctional/serial/ExtraConfig 41.9
78 TestFunctional/serial/ComponentHealth 0.07
79 TestFunctional/serial/LogsCmd 1.1
80 TestFunctional/serial/LogsFileCmd 1.16
81 TestFunctional/serial/InvalidService 4.31
83 TestFunctional/parallel/ConfigCmd 0.44
84 TestFunctional/parallel/DashboardCmd 26.67
85 TestFunctional/parallel/DryRun 0.29
86 TestFunctional/parallel/InternationalLanguage 0.16
87 TestFunctional/parallel/StatusCmd 1
91 TestFunctional/parallel/ServiceCmdConnect 10.53
92 TestFunctional/parallel/AddonsCmd 0.17
93 TestFunctional/parallel/PersistentVolumeClaim 56.8
95 TestFunctional/parallel/SSHCmd 0.54
96 TestFunctional/parallel/CpCmd 1.07
97 TestFunctional/parallel/MySQL 34.28
98 TestFunctional/parallel/FileSync 0.4
99 TestFunctional/parallel/CertSync 1.39
103 TestFunctional/parallel/NodeLabels 0.07
105 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
107 TestFunctional/parallel/License 0.88
108 TestFunctional/parallel/ServiceCmd/DeployApp 14.3
109 TestFunctional/parallel/Version/short 0.06
110 TestFunctional/parallel/Version/components 0.67
111 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
112 TestFunctional/parallel/ImageCommands/ImageListTable 0.35
113 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
114 TestFunctional/parallel/ImageCommands/ImageListYaml 0.38
115 TestFunctional/parallel/ImageCommands/ImageBuild 3.74
116 TestFunctional/parallel/ImageCommands/Setup 2.06
126 TestFunctional/parallel/DockerEnv/bash 0.89
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.32
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
131 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.46
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.47
133 TestFunctional/parallel/ServiceCmd/List 0.32
134 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.35
136 TestFunctional/parallel/ServiceCmd/Format 0.35
137 TestFunctional/parallel/ProfileCmd/profile_not_create 0.34
138 TestFunctional/parallel/ServiceCmd/URL 0.37
139 TestFunctional/parallel/ProfileCmd/profile_list 0.37
140 TestFunctional/parallel/MountCmd/any-port 27.13
141 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
142 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2
143 TestFunctional/parallel/ImageCommands/ImageRemove 1.14
144 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.62
145 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 2.09
146 TestFunctional/parallel/MountCmd/specific-port 1.87
147 TestFunctional/parallel/MountCmd/VerifyCleanup 1.68
148 TestFunctional/delete_addon-resizer_images 0.07
149 TestFunctional/delete_my-image_image 0.01
150 TestFunctional/delete_minikube_cached_images 0.01
151 TestGvisorAddon 377.27
154 TestImageBuild/serial/Setup 52.14
155 TestImageBuild/serial/NormalBuild 2.33
156 TestImageBuild/serial/BuildWithBuildArg 1.41
157 TestImageBuild/serial/BuildWithDockerIgnore 0.41
158 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.29
161 TestIngressAddonLegacy/StartLegacyK8sCluster 92.65
163 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 17.44
164 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.52
165 TestIngressAddonLegacy/serial/ValidateIngressAddons 47.68
168 TestJSONOutput/start/Command 66.79
169 TestJSONOutput/start/Audit 0
171 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
174 TestJSONOutput/pause/Command 0.57
175 TestJSONOutput/pause/Audit 0
177 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
178 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
180 TestJSONOutput/unpause/Command 0.54
181 TestJSONOutput/unpause/Audit 0
183 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/stop/Command 7.41
187 TestJSONOutput/stop/Audit 0
189 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
191 TestErrorJSONOutput 0.22
196 TestMainNoArgs 0.06
197 TestMinikubeProfile 105.08
200 TestMountStart/serial/StartWithMountFirst 31.32
201 TestMountStart/serial/VerifyMountFirst 0.41
202 TestMountStart/serial/StartWithMountSecond 32.39
203 TestMountStart/serial/VerifyMountSecond 0.46
204 TestMountStart/serial/DeleteFirst 1.07
205 TestMountStart/serial/VerifyMountPostDelete 0.41
206 TestMountStart/serial/Stop 2.09
207 TestMountStart/serial/RestartStopped 25.92
208 TestMountStart/serial/VerifyMountPostStop 0.4
211 TestMultiNode/serial/FreshStart2Nodes 180.07
212 TestMultiNode/serial/DeployApp2Nodes 4.8
213 TestMultiNode/serial/PingHostFrom2Pods 0.9
214 TestMultiNode/serial/AddNode 46.18
215 TestMultiNode/serial/ProfileList 0.22
216 TestMultiNode/serial/CopyFile 7.65
217 TestMultiNode/serial/StopNode 3.38
218 TestMultiNode/serial/StartAfterStop 31.15
219 TestMultiNode/serial/RestartKeepsNodes 171.08
220 TestMultiNode/serial/DeleteNode 1.78
221 TestMultiNode/serial/StopMultiNode 25.56
222 TestMultiNode/serial/RestartMultiNode 106.83
223 TestMultiNode/serial/ValidateNameConflict 51.26
228 TestPreload 230.2
230 TestScheduledStopUnix 121.15
231 TestSkaffold 143.53
234 TestRunningBinaryUpgrade 191.86
236 TestKubernetesUpgrade 200.7
238 TestStoppedBinaryUpgrade/Setup 1.7
239 TestStoppedBinaryUpgrade/Upgrade 236.56
248 TestPause/serial/Start 118.74
249 TestStoppedBinaryUpgrade/MinikubeLogs 1.38
251 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
252 TestNoKubernetes/serial/StartWithK8s 59.57
253 TestPause/serial/SecondStartNoReconfiguration 59.49
265 TestNoKubernetes/serial/StartWithStopK8s 32.3
266 TestPause/serial/Pause 0.61
267 TestPause/serial/VerifyStatus 0.3
268 TestPause/serial/Unpause 0.61
269 TestPause/serial/PauseAgain 0.69
270 TestPause/serial/DeletePaused 1.06
271 TestPause/serial/VerifyDeletedResources 0.68
272 TestNoKubernetes/serial/Start 78.75
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.24
274 TestNoKubernetes/serial/ProfileList 1.12
275 TestNoKubernetes/serial/Stop 2.2
276 TestNoKubernetes/serial/StartNoArgs 47.22
278 TestStartStop/group/old-k8s-version/serial/FirstStart 164.88
279 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
281 TestStartStop/group/no-preload/serial/FirstStart 144.49
283 TestStartStop/group/embed-certs/serial/FirstStart 119.5
285 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.86
286 TestStartStop/group/old-k8s-version/serial/DeployApp 10.54
287 TestStartStop/group/embed-certs/serial/DeployApp 10.48
288 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
289 TestStartStop/group/old-k8s-version/serial/Stop 13.37
290 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 4.03
291 TestStartStop/group/embed-certs/serial/Stop 13.15
292 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
293 TestStartStop/group/old-k8s-version/serial/SecondStart 443.81
294 TestStartStop/group/no-preload/serial/DeployApp 9.58
295 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
296 TestStartStop/group/embed-certs/serial/SecondStart 342.67
297 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
298 TestStartStop/group/no-preload/serial/Stop 13.14
299 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.29
301 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
302 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
303 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.13
304 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.26
305 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 310.11
311 TestStartStop/group/newest-cni/serial/FirstStart 82.46
312 TestStartStop/group/newest-cni/serial/DeployApp 0
313 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
314 TestStartStop/group/newest-cni/serial/Stop 13.13
315 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
316 TestStartStop/group/newest-cni/serial/SecondStart 45.73
317 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
318 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
319 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
320 TestStartStop/group/newest-cni/serial/Pause 2.46
321 TestNetworkPlugins/group/auto/Start 69.17
322 TestNetworkPlugins/group/auto/KubeletFlags 0.21
323 TestNetworkPlugins/group/auto/NetCatPod 12.38
324 TestNetworkPlugins/group/auto/DNS 0.18
325 TestNetworkPlugins/group/auto/Localhost 0.15
326 TestNetworkPlugins/group/auto/HairPin 0.15
327 TestNetworkPlugins/group/kindnet/Start 82.92
328 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 16.03
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
330 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 5.03
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
332 TestStartStop/group/embed-certs/serial/Pause 3.1
333 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
334 TestNetworkPlugins/group/calico/Start 104.66
335 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
336 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.6
337 TestNetworkPlugins/group/custom-flannel/Start 108.19
338 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
339 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
340 TestNetworkPlugins/group/kindnet/NetCatPod 11.42
341 TestNetworkPlugins/group/kindnet/DNS 0.18
342 TestNetworkPlugins/group/kindnet/Localhost 0.15
343 TestNetworkPlugins/group/kindnet/HairPin 0.18
344 TestNetworkPlugins/group/false/Start 86.55
345 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
346 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
348 TestStartStop/group/old-k8s-version/serial/Pause 2.86
349 TestNetworkPlugins/group/enable-default-cni/Start 85.75
350 TestNetworkPlugins/group/calico/ControllerPod 5.03
351 TestNetworkPlugins/group/calico/KubeletFlags 0.25
352 TestNetworkPlugins/group/calico/NetCatPod 13.49
353 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
354 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.43
355 TestNetworkPlugins/group/calico/DNS 0.4
356 TestNetworkPlugins/group/calico/Localhost 0.2
357 TestNetworkPlugins/group/calico/HairPin 0.19
358 TestNetworkPlugins/group/custom-flannel/DNS 0.26
359 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
360 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
361 TestNetworkPlugins/group/false/KubeletFlags 0.28
362 TestNetworkPlugins/group/false/NetCatPod 13.51
363 TestNetworkPlugins/group/flannel/Start 85.63
364 TestNetworkPlugins/group/bridge/Start 97.88
365 TestNetworkPlugins/group/false/DNS 0.21
366 TestNetworkPlugins/group/false/Localhost 0.15
367 TestNetworkPlugins/group/false/HairPin 0.18
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.36
370 TestNetworkPlugins/group/kubenet/Start 101.82
371 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
372 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
373 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
374 TestNetworkPlugins/group/flannel/ControllerPod 5.02
375 TestNetworkPlugins/group/flannel/KubeletFlags 0.5
376 TestNetworkPlugins/group/flannel/NetCatPod 14.34
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.24
378 TestNetworkPlugins/group/bridge/NetCatPod 12.42
379 TestNetworkPlugins/group/flannel/DNS 0.21
380 TestNetworkPlugins/group/flannel/Localhost 0.18
381 TestNetworkPlugins/group/flannel/HairPin 0.18
382 TestNetworkPlugins/group/bridge/DNS 0.2
383 TestNetworkPlugins/group/bridge/Localhost 0.16
384 TestNetworkPlugins/group/bridge/HairPin 0.17
385 TestNetworkPlugins/group/kubenet/KubeletFlags 0.24
386 TestNetworkPlugins/group/kubenet/NetCatPod 12.36
387 TestNetworkPlugins/group/kubenet/DNS 0.18
388 TestNetworkPlugins/group/kubenet/Localhost 0.15
389 TestNetworkPlugins/group/kubenet/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (44.56s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-520264 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-520264 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=kvm2 : (44.563609645s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (44.56s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-520264
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-520264: exit status 85 (74.187882ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-520264 | jenkins | v1.32.0 | 17 Nov 23 15:56 UTC |          |
	|         | -p download-only-520264        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/17 15:56:52
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 15:56:52.841473   16570 out.go:296] Setting OutFile to fd 1 ...
	I1117 15:56:52.841583   16570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:56:52.841605   16570 out.go:309] Setting ErrFile to fd 2...
	I1117 15:56:52.841610   16570 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:56:52.841792   16570 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	W1117 15:56:52.841923   16570 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17634-9353/.minikube/config/config.json: open /home/jenkins/minikube-integration/17634-9353/.minikube/config/config.json: no such file or directory
	I1117 15:56:52.842524   16570 out.go:303] Setting JSON to true
	I1117 15:56:52.843353   16570 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2359,"bootTime":1700234254,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 15:56:52.843419   16570 start.go:138] virtualization: kvm guest
	I1117 15:56:52.845866   16570 out.go:97] [download-only-520264] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 15:56:52.847421   16570 out.go:169] MINIKUBE_LOCATION=17634
	W1117 15:56:52.846000   16570 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball: no such file or directory
	I1117 15:56:52.846021   16570 notify.go:220] Checking for updates...
	I1117 15:56:52.850553   16570 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 15:56:52.852034   16570 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 15:56:52.853447   16570 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 15:56:52.854908   16570 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1117 15:56:52.857511   16570 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1117 15:56:52.857726   16570 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 15:56:52.957261   16570 out.go:97] Using the kvm2 driver based on user configuration
	I1117 15:56:52.957290   16570 start.go:298] selected driver: kvm2
	I1117 15:56:52.957296   16570 start.go:902] validating driver "kvm2" against <nil>
	I1117 15:56:52.957621   16570 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:56:52.957743   16570 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17634-9353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 15:56:52.972812   16570 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1117 15:56:52.972876   16570 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1117 15:56:52.973288   16570 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1117 15:56:52.973419   16570 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1117 15:56:52.973461   16570 cni.go:84] Creating CNI manager for ""
	I1117 15:56:52.973474   16570 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I1117 15:56:52.973479   16570 start_flags.go:323] config:
	{Name:download-only-520264 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-520264 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 15:56:52.973708   16570 iso.go:125] acquiring lock: {Name:mkfd0387d5051e05351c5f239ccf79a882c64dcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:56:52.975685   16570 out.go:97] Downloading VM boot image ...
	I1117 15:56:52.975715   16570 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1117 15:57:01.720972   16570 out.go:97] Starting control plane node download-only-520264 in cluster download-only-520264
	I1117 15:57:01.721003   16570 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1117 15:57:01.818076   16570 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1117 15:57:01.818123   16570 cache.go:56] Caching tarball of preloaded images
	I1117 15:57:01.818279   16570 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1117 15:57:01.820575   16570 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1117 15:57:01.820595   16570 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 15:57:01.927240   16570 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1117 15:57:15.267509   16570 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 15:57:15.267624   16570 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1117 15:57:16.118390   16570 cache.go:59] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1117 15:57:16.118763   16570 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/download-only-520264/config.json ...
	I1117 15:57:16.118801   16570 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/download-only-520264/config.json: {Name:mkc9ada24a495d0f35392cc5f45a238c583650a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1117 15:57:16.118982   16570 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1117 15:57:16.119211   16570 download.go:107] Downloading: https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.16.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/linux/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-520264"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
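
In the Last Start log above, download.go fetches the preload with a ?checksum=md5:... suffix, then saves and verifies the checksum. A stripped-down sketch of that verify step (a hypothetical helper, not minikube's implementation; the digest is copied from the log line above):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams a file through MD5 and compares it to the expected
	// hex digest, the kind of check download.go performs after fetching.
	func verifyMD5(path, want string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// Digest taken from the download log line above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4",
			"326f3ce331abb64565b50b8c9e791244")
		fmt.Println(err)
	}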

                                                
                                    
TestDownloadOnly/v1.28.3/json-events (28.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-520264 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-520264 --force --alsologtostderr --kubernetes-version=v1.28.3 --container-runtime=docker --driver=kvm2 : (28.079814701s)
--- PASS: TestDownloadOnly/v1.28.3/json-events (28.08s)

                                                
                                    
TestDownloadOnly/v1.28.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/preload-exists
--- PASS: TestDownloadOnly/v1.28.3/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.3/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-520264
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-520264: exit status 85 (70.326342ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-520264 | jenkins | v1.32.0 | 17 Nov 23 15:56 UTC |          |
	|         | -p download-only-520264        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-520264 | jenkins | v1.32.0 | 17 Nov 23 15:57 UTC |          |
	|         | -p download-only-520264        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/17 15:57:37
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1117 15:57:37.483465   16705 out.go:296] Setting OutFile to fd 1 ...
	I1117 15:57:37.483632   16705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:57:37.483645   16705 out.go:309] Setting ErrFile to fd 2...
	I1117 15:57:37.483654   16705 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 15:57:37.483849   16705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	W1117 15:57:37.483966   16705 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17634-9353/.minikube/config/config.json: open /home/jenkins/minikube-integration/17634-9353/.minikube/config/config.json: no such file or directory
	I1117 15:57:37.484364   16705 out.go:303] Setting JSON to true
	I1117 15:57:37.485144   16705 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2404,"bootTime":1700234254,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 15:57:37.485200   16705 start.go:138] virtualization: kvm guest
	I1117 15:57:37.487320   16705 out.go:97] [download-only-520264] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 15:57:37.489041   16705 out.go:169] MINIKUBE_LOCATION=17634
	I1117 15:57:37.487478   16705 notify.go:220] Checking for updates...
	I1117 15:57:37.492209   16705 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 15:57:37.493901   16705 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 15:57:37.495464   16705 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 15:57:37.496955   16705 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1117 15:57:37.499716   16705 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1117 15:57:37.500188   16705 config.go:182] Loaded profile config "download-only-520264": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1117 15:57:37.500240   16705 start.go:810] api.Load failed for download-only-520264: filestore "download-only-520264": Docker machine "download-only-520264" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 15:57:37.500320   16705 driver.go:378] Setting default libvirt URI to qemu:///system
	W1117 15:57:37.500363   16705 start.go:810] api.Load failed for download-only-520264: filestore "download-only-520264": Docker machine "download-only-520264" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1117 15:57:37.532781   16705 out.go:97] Using the kvm2 driver based on existing profile
	I1117 15:57:37.532811   16705 start.go:298] selected driver: kvm2
	I1117 15:57:37.532818   16705 start.go:902] validating driver "kvm2" against &{Name:download-only-520264 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-520264 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 15:57:37.533327   16705 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:57:37.533404   16705 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17634-9353/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1117 15:57:37.547842   16705 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1117 15:57:37.548631   16705 cni.go:84] Creating CNI manager for ""
	I1117 15:57:37.548652   16705 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1117 15:57:37.548667   16705 start_flags.go:323] config:
	{Name:download-only-520264 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:download-only-520264 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 15:57:37.548854   16705 iso.go:125] acquiring lock: {Name:mkfd0387d5051e05351c5f239ccf79a882c64dcb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1117 15:57:37.550719   16705 out.go:97] Starting control plane node download-only-520264 in cluster download-only-520264
	I1117 15:57:37.550732   16705 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1117 15:57:37.900392   16705 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1117 15:57:37.900436   16705 cache.go:56] Caching tarball of preloaded images
	I1117 15:57:37.900598   16705 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1117 15:57:37.902439   16705 out.go:97] Downloading Kubernetes v1.28.3 preload ...
	I1117 15:57:37.902463   16705 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 15:57:38.005022   16705 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4?checksum=md5:82104bbf889ff8b69d5c141ce86c05ac -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4
	I1117 15:57:49.222610   16705 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 15:57:49.222701   16705 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17634-9353/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-docker-overlay2-amd64.tar.lz4 ...
	I1117 15:57:50.196390   16705 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on docker
	I1117 15:57:50.196508   16705 profile.go:148] Saving config to /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/download-only-520264/config.json ...
	I1117 15:57:50.196709   16705 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime docker
	I1117 15:57:50.196895   16705 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.3/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17634-9353/.minikube/cache/linux/amd64/v1.28.3/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-520264"

-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.3/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-520264
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-619535 --alsologtostderr --binary-mirror http://127.0.0.1:40879 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-619535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-619535
--- PASS: TestBinaryMirror (0.58s)
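
For context: TestBinaryMirror serves Kubernetes binaries over a local HTTP endpoint (127.0.0.1:40879 above) and passes --binary-mirror so minikube fetches kubectl and friends from it instead of dl.k8s.io. A minimal Go sketch of the same idea; the /tmp/binary-cache path and profile name are placeholders, and it assumes the mirror must expose the same path layout as the upstream release host.

	// binary_mirror_sketch.go: serve a local directory as a binary mirror,
	// then point minikube at it. Paths and names are hypothetical.
	package main

	import (
		"log"
		"net/http"
		"os/exec"
	)

	func main() {
		// Serve cached binaries, e.g. /tmp/binary-cache/release/v1.28.3/bin/linux/amd64/kubectl.
		go func() {
			log.Fatal(http.ListenAndServe("127.0.0.1:40879",
				http.FileServer(http.Dir("/tmp/binary-cache"))))
		}()

		// Ask minikube to pull its binaries from the mirror instead of dl.k8s.io.
		out, err := exec.Command("out/minikube-linux-amd64", "start", "--download-only",
			"-p", "binary-mirror-demo", "--binary-mirror", "http://127.0.0.1:40879",
			"--driver=kvm2").CombinedOutput()
		log.Printf("minikube exited: %v\n%s", err, out)
	}
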

TestOffline (129.3s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-926589 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-926589 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m8.231583199s)
helpers_test.go:175: Cleaning up "offline-docker-926589" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-926589
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-926589: (1.069748915s)
--- PASS: TestOffline (129.30s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051402
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-051402: exit status 85 (61.524911ms)

-- stdout --
	* Profile "addons-051402" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051402"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051402
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-051402: exit status 85 (62.160091ms)

-- stdout --
	* Profile "addons-051402" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-051402"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (160.76s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-051402 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-051402 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m40.761843834s)
--- PASS: TestAddons/Setup (160.76s)

TestAddons/parallel/Registry (16.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 24.090384ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-wbk74" [198bd744-0fc1-4a4d-9a86-1c7d40b0e1cb] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.018258134s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-5v82x" [959d44e8-58fe-4273-bf2a-1e64eab9693a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.016067876s
addons_test.go:339: (dbg) Run:  kubectl --context addons-051402 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-051402 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-051402 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.631484615s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 ip
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.67s)
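
The wget --spider step above is the actual registry check: a throwaway busybox pod resolves and probes the addon's ClusterIP service from inside the cluster. A standalone approximation in Go (context and image copied from the log; the pod name is arbitrary):

	// registry_probe.go: one-shot in-cluster probe of the registry service.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "addons-051402", "run", "--rm",
			"registry-probe", "--restart=Never", "--image=gcr.io/k8s-minikube/busybox",
			"-i", "--", "sh", "-c",
			"wget --spider -S http://registry.kube-system.svc.cluster.local").CombinedOutput()
		if err != nil {
			log.Fatalf("registry not reachable: %v\n%s", err, out)
		}
		log.Printf("registry answered:\n%s", out)
	}
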

TestAddons/parallel/Ingress (25.93s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-051402 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-051402 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-051402 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [7783e723-3161-4cf2-9429-e285f5e5adcc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [7783e723-3161-4cf2-9429-e285f5e5adcc] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.020565227s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context addons-051402 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.172
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-051402 addons disable ingress-dns --alsologtostderr -v=1: (1.250385829s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-051402 addons disable ingress --alsologtostderr -v=1: (7.691877199s)
--- PASS: TestAddons/parallel/Ingress (25.93s)
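
The curl step above validates ingress routing end to end: the request hits port 80 inside the VM with a spoofed Host header, and the nginx ingress controller forwards it to the test Service. Roughly the same probe as a Go sketch (profile name from the log):

	// ingress_probe.go: Host-header curl issued through `minikube ssh`.
	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "addons-051402", "ssh",
			"curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'").CombinedOutput()
		if err != nil {
			log.Fatalf("ingress probe failed: %v\n%s", err, out)
		}
		log.Printf("nginx responded:\n%s", out)
	}
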

TestAddons/parallel/InspektorGadget (10.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-mcxqb" [46cd39f6-b079-46bc-8a0f-c8cf5c4013d7] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.013251936s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-051402
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-051402: (5.816130393s)
--- PASS: TestAddons/parallel/InspektorGadget (10.83s)

TestAddons/parallel/MetricsServer (5.82s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 5.440853ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-z9zsl" [168cb3ad-e305-464c-9c8f-5db6c0e0cce1] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.034581492s
addons_test.go:414: (dbg) Run:  kubectl --context addons-051402 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.82s)

TestAddons/parallel/HelmTiller (12.67s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.167942ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-qg8pz" [fab9df68-e00f-4be8-bc7c-bfc7516a9f83] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.013831971s
addons_test.go:472: (dbg) Run:  kubectl --context addons-051402 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-051402 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.007180442s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.67s)

TestAddons/parallel/CSI (68.13s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 24.721449ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-051402 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
2023/11/17 16:01:03 [DEBUG] GET http://192.168.39.172:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-051402 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fbdc9fa1-3451-4889-b6ee-7faec80fbc7d] Pending
helpers_test.go:344: "task-pv-pod" [fbdc9fa1-3451-4889-b6ee-7faec80fbc7d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fbdc9fa1-3451-4889-b6ee-7faec80fbc7d] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 17.020156862s
addons_test.go:583: (dbg) Run:  kubectl --context addons-051402 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-051402 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-051402 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-051402 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-051402 delete pod task-pv-pod
addons_test.go:593: (dbg) Done: kubectl --context addons-051402 delete pod task-pv-pod: (1.034142901s)
addons_test.go:599: (dbg) Run:  kubectl --context addons-051402 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-051402 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-051402 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0536182f-d7a0-4a77-ba20-b9473db224e6] Pending
helpers_test.go:344: "task-pv-pod-restore" [0536182f-d7a0-4a77-ba20-b9473db224e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0536182f-d7a0-4a77-ba20-b9473db224e6] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.018839848s
addons_test.go:625: (dbg) Run:  kubectl --context addons-051402 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-051402 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-051402 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-051402 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.654910002s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (68.13s)
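
The long run of identical kubectl get pvc lines above is the helper at helpers_test.go:394 polling .status.phase until the csi-hostpath driver binds the claim. A simplified approximation of that loop; the 2s interval and the error handling are guesses, not the helper's actual implementation:

	// pvc_wait_sketch.go: poll a PVC's phase until it reaches Bound.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitPVCBound shells out to kubectl the same way the log above does.
	func waitPVCBound(kubecontext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubecontext,
				"get", "pvc", name, "-o", "jsonpath={.status.phase}",
				"-n", namespace).Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second) // interval is a guess, not the helper's value
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", namespace, name, timeout)
	}

	func main() {
		if err := waitPVCBound("addons-051402", "hpvc", "default", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
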

TestAddons/parallel/Headlamp (16.67s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-051402 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-051402 --alsologtostderr -v=1: (1.621340345s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-kcgjp" [4e14c381-8ce1-43d6-b321-196def7099e2] Pending
helpers_test.go:344: "headlamp-777fd4b855-kcgjp" [4e14c381-8ce1-43d6-b321-196def7099e2] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-kcgjp" [4e14c381-8ce1-43d6-b321-196def7099e2] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.046661833s
--- PASS: TestAddons/parallel/Headlamp (16.67s)

TestAddons/parallel/CloudSpanner (5.82s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-fj98q" [3f15c33c-f762-4548-83d1-8ed9297660ca] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015631753s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-051402
--- PASS: TestAddons/parallel/CloudSpanner (5.82s)

TestAddons/parallel/LocalPath (55.24s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-051402 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-051402 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f8a6be90-365d-4bdf-8485-681b1bd86dae] Pending
helpers_test.go:344: "test-local-path" [f8a6be90-365d-4bdf-8485-681b1bd86dae] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f8a6be90-365d-4bdf-8485-681b1bd86dae] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f8a6be90-365d-4bdf-8485-681b1bd86dae] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.016782835s
addons_test.go:890: (dbg) Run:  kubectl --context addons-051402 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 ssh "cat /opt/local-path-provisioner/pvc-0362ea53-f48b-4b00-a706-8794fc49b539_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-051402 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-051402 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-051402 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-051402 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.546223804s)
--- PASS: TestAddons/parallel/LocalPath (55.24s)

TestAddons/parallel/NvidiaDevicePlugin (5.71s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bkl6h" [4f66868a-557b-4334-a2fb-fe5b513c026b] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0375503s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-051402
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.71s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-051402 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-051402 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/StoppedEnableDisable (13.41s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-051402
addons_test.go:171: (dbg) Done: out/minikube-linux-amd64 stop -p addons-051402: (13.105495876s)
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-051402
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-051402
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-051402
--- PASS: TestAddons/StoppedEnableDisable (13.41s)

TestCertOptions (85.81s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-663778 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-663778 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m23.995295027s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-663778 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-663778 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-663778 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-663778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-663778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-663778: (1.2727597s)
--- PASS: TestCertOptions (85.81s)
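
The openssl x509 step above verifies that the --apiserver-ips and --apiserver-names values ended up as SANs in the generated apiserver certificate. The same check can be done with crypto/x509 instead of openssl; a sketch using the profile name from the log, and assuming sudo is needed to read the cert file:

	// cert_sans_sketch.go: fetch the apiserver cert out of the VM and list
	// its SANs, which should include the custom IPs and names requested.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"log"
		"os/exec"
	)

	func main() {
		raw, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-663778",
			"ssh", "sudo cat /var/lib/minikube/certs/apiserver.crt").Output()
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			log.Fatal("no PEM block in apiserver.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		log.Printf("DNS SANs: %v", cert.DNSNames)    // expect localhost, www.google.com, ...
		log.Printf("IP SANs:  %v", cert.IPAddresses) // expect 127.0.0.1, 192.168.15.15, ...
	}
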

TestCertExpiration (290.16s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-820190 --memory=2048 --cert-expiration=3m --driver=kvm2 
E1117 16:41:19.223420   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-820190 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m10.678688973s)
E1117 16:42:32.145864   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-820190 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E1117 16:45:20.420957   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:45:35.192948   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:45:47.396508   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:45:48.103807   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-820190 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (38.26497606s)
helpers_test.go:175: Cleaning up "cert-expiration-820190" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-820190
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-820190: (1.214019207s)
--- PASS: TestCertExpiration (290.16s)

TestDockerFlags (59.25s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-936922 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-936922 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (57.365625025s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-936922 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-936922 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-936922" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-936922
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-936922: (1.278849799s)
--- PASS: TestDockerFlags (59.25s)
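
The two systemctl show calls above confirm that --docker-env values land in the docker unit's Environment= property and --docker-opt values in its ExecStart line. A sketch of the Environment half (profile name and expected values copied from the log):

	// docker_env_check.go: assert that --docker-env values survived into
	// the docker systemd unit inside the VM.
	package main

	import (
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "docker-flags-936922",
			"ssh", "sudo systemctl show docker --property=Environment --no-pager").Output()
		if err != nil {
			log.Fatal(err)
		}
		env := string(out)
		for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
			if !strings.Contains(env, want) {
				log.Fatalf("expected %q in docker Environment, got: %s", want, env)
			}
		}
		log.Printf("docker-env flags present: %s", strings.TrimSpace(env))
	}
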

TestForceSystemdFlag (52.59s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-966772 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-966772 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (51.362600787s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-966772 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-966772" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-966772
--- PASS: TestForceSystemdFlag (52.59s)

TestForceSystemdEnv (85.94s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-719684 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
E1117 16:43:04.263444   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-719684 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m24.597123142s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-719684 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-719684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-719684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-719684: (1.082345542s)
--- PASS: TestForceSystemdEnv (85.94s)

TestKVMDriverInstallOrUpdate (4.88s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
E1117 16:41:01.382442   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (4.88s)

TestErrorSpam/start (0.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

TestErrorSpam/status (0.7s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status: exit status 6 (241.688513ms)

-- stdout --
	nospam-725106
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:02:52.693604   19774 status.go:415] kubeconfig endpoint: extract IP: "nospam-725106" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status" failed: exit status 6
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status: exit status 6 (231.23772ms)

-- stdout --
	nospam-725106
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:02:52.929385   19804 status.go:415] kubeconfig endpoint: extract IP: "nospam-725106" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
error_spam_test.go:161: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status" failed: exit status 6
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status: exit status 6 (228.206957ms)

-- stdout --
	nospam-725106
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Misconfigured
	
	
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1117 16:02:53.158001   19834 status.go:415] kubeconfig endpoint: extract IP: "nospam-725106" does not appear in /home/jenkins/minikube-integration/17634-9353/kubeconfig

** /stderr **
error_spam_test.go:184: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 status" failed: exit status 6
--- PASS: TestErrorSpam/status (0.70s)
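
minikube status reports component state through its exit code; the exit status 6 seen here accompanies the Misconfigured kubeconfig shown in stdout (a reading inferred from this log rather than from documentation). A sketch that reacts to that code with the fix the warning itself suggests, minikube update-context:

	// status_fix_sketch.go: on the misconfigured-kubeconfig exit code seen
	// above, repoint kubectl at the profile.
	package main

	import (
		"errors"
		"log"
		"os/exec"
	)

	func main() {
		profile := "nospam-725106"
		err := exec.Command("out/minikube-linux-amd64", "-p", profile, "status").Run()
		var ee *exec.ExitError
		if errors.As(err, &ee) && ee.ExitCode() == 6 {
			log.Printf("kubeconfig misconfigured for %s; running update-context", profile)
			if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
				"update-context").CombinedOutput(); err != nil {
				log.Fatalf("update-context failed: %v\n%s", err, out)
			}
		}
	}
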

TestErrorSpam/pause (4.51s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause: exit status 80 (1.418614145s)

-- stdout --
	* Pausing node nospam-725106 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:161: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause: exit status 80 (1.48211458s)
-- stdout --
	* Pausing node nospam-725106 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:161: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause: exit status 80 (1.60838499s)
-- stdout --
	* Pausing node nospam-725106 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file kubelet.service does not exist.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:184: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (4.51s)
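Note: all three pause attempts fail identically because kubelet.service was never installed in the guest, so `sudo systemctl disable --now kubelet` exits 1. A hedged way to confirm the missing unit from the host (a sketch, not from this run):

	out/minikube-linux-amd64 -p nospam-725106 ssh sudo systemctl status kubelet
	out/minikube-linux-amd64 -p nospam-725106 ssh systemctl list-unit-files kubelet.service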

TestErrorSpam/unpause (5.77s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause: exit status 80 (2.272564526s)
-- stdout --
	* Unpausing node nospam-725106 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:161: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause" failed: exit status 80
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause
error_spam_test.go:159: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause: exit status 80 (2.119757986s)
-- stdout --
	* Unpausing node nospam-725106 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:161: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause" failed: exit status 80
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause
error_spam_test.go:182: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause: exit status 80 (1.375461049s)
-- stdout --
	* Unpausing node nospam-725106 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: kubelet start: sudo systemctl start kubelet: Process exited with status 5
	stdout:
	
	stderr:
	Failed to start kubelet.service: Unit kubelet.service not found.
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:184: "out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.77s)
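Note: unpause fails for the same root cause as pause, but with a different exit status. Per the captured stderr, systemd reports the missing unit differently per verb, which is why GUEST_PAUSE shows status 1 and GUEST_UNPAUSE shows status 5 (the LSB "program is not installed" convention). Condensed from the output above:

	sudo systemctl disable --now kubelet   # exits 1: "Unit file kubelet.service does not exist."
	sudo systemctl start kubelet           # exits 5: "Unit kubelet.service not found."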

TestErrorSpam/stop (106.97s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop: (1m46.802771336s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-725106 --log_dir /tmp/nospam-725106 stop
--- PASS: TestErrorSpam/stop (106.97s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17634-9353/.minikube/files/etc/test/nested/copy/16558/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (65.47s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074045 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
E1117 16:05:47.396305   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:47.401989   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:47.412316   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:47.432610   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:47.472945   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:47.553281   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:47.713708   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:48.034348   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:48.675326   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:49.955885   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:05:52.516240   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-074045 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.47183253s)
--- PASS: TestFunctional/serial/StartWithProxy (65.47s)
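Note: the repeated "E1117 ... cert_rotation.go:168" lines are klog-formatted errors (E = error severity, 1117 = Nov 17, then time, pid, file:line) from the long-lived test process (pid 16558), whose certificate watcher still references the addons-051402 profile from an earlier test; they interleave with this test's output but do not affect its result. A hedged filter for scanning a saved copy of this log (file name is hypothetical):

	grep -v 'cert_rotation.go:168' test-output.log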

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.07s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074045 --alsologtostderr -v=8
E1117 16:05:57.636781   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:06:07.877447   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:06:28.357921   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-074045 --alsologtostderr -v=8: (39.072329559s)
functional_test.go:659: soft start took 39.073019285s for "functional-074045" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.07s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-074045 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 cache add registry.k8s.io/pause:3.1: (1.246372396s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 cache add registry.k8s.io/pause:3.3: (1.329960922s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 cache add registry.k8s.io/pause:latest: (1.213717526s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.79s)

TestFunctional/serial/CacheCmd/cache/add_local (1.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-074045 /tmp/TestFunctionalserialCacheCmdcacheadd_local2527551443/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cache add minikube-local-cache-test:functional-074045
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 cache add minikube-local-cache-test:functional-074045: (1.392516326s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cache delete minikube-local-cache-test:functional-074045
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-074045
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.74s)
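The add_local flow above, condensed: build a throwaway image on the host, push it into minikube's cache, then clean up both sides. A sketch (the build-context path is the test's temp dir, shown as a placeholder here):

	docker build -t minikube-local-cache-test:functional-074045 <build-context>
	out/minikube-linux-amd64 -p functional-074045 cache add minikube-local-cache-test:functional-074045
	out/minikube-linux-amd64 -p functional-074045 cache delete minikube-local-cache-test:functional-074045
	docker rmi minikube-local-cache-test:functional-074045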

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (248.88326ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.61s)
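The reload round trip above, condensed: removing the image inside the node makes `crictl inspecti` exit 1, and `cache reload` re-pushes everything in the cache so the same inspect succeeds again:

	out/minikube-linux-amd64 -p functional-074045 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-074045 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-amd64 -p functional-074045 cache reload
	out/minikube-linux-amd64 -p functional-074045 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds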

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 kubectl -- --context functional-074045 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-074045 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (41.9s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074045 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1117 16:07:09.318268   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-074045 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.902720219s)
functional_test.go:757: restart took 41.902826971s for "functional-074045" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (41.90s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-074045 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.1s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 logs: (1.101300278s)
--- PASS: TestFunctional/serial/LogsCmd (1.10s)

TestFunctional/serial/LogsFileCmd (1.16s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 logs --file /tmp/TestFunctionalserialLogsFileCmd2292576432/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 logs --file /tmp/TestFunctionalserialLogsFileCmd2292576432/001/logs.txt: (1.159672576s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.16s)

TestFunctional/serial/InvalidService (4.31s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-074045 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-074045
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-074045: exit status 115 (294.06424ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.223:31892 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-074045 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.31s)
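Exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the service exists and gets a NodePort URL, but no pod ever becomes ready behind it. A hedged way to observe the same condition directly (a sketch, not from this run):

	kubectl --context functional-074045 get endpoints invalid-svc   # no ready addresses -> service unreachable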

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 config get cpus: exit status 14 (86.460321ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 config get cpus: exit status 14 (67.024795ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
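Exit status 14 is minikube's "key not found in config" result, so the sequence above is the expected unset/set/get round trip. Condensed:

	out/minikube-linux-amd64 -p functional-074045 config get cpus     # exit 14 while unset
	out/minikube-linux-amd64 -p functional-074045 config set cpus 2
	out/minikube-linux-amd64 -p functional-074045 config get cpus     # prints 2
	out/minikube-linux-amd64 -p functional-074045 config unset cpus   # get exits 14 again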

TestFunctional/parallel/DashboardCmd (26.67s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074045 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-074045 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 23606: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (26.67s)

TestFunctional/parallel/DryRun (0.29s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-074045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (143.711735ms)
-- stdout --
	* [functional-074045] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1117 16:07:54.916702   23477 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:07:54.916962   23477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:54.916975   23477 out.go:309] Setting ErrFile to fd 2...
	I1117 16:07:54.916981   23477 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:54.917142   23477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:07:54.917701   23477 out.go:303] Setting JSON to false
	I1117 16:07:54.918622   23477 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3021,"bootTime":1700234254,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:07:54.918677   23477 start.go:138] virtualization: kvm guest
	I1117 16:07:54.920834   23477 out.go:177] * [functional-074045] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1117 16:07:54.922332   23477 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:07:54.922330   23477 notify.go:220] Checking for updates...
	I1117 16:07:54.923786   23477 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:07:54.925180   23477 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 16:07:54.926597   23477 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 16:07:54.927888   23477 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:07:54.929242   23477 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:07:54.930930   23477 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:07:54.931391   23477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:07:54.931446   23477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:54.945364   23477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35149
	I1117 16:07:54.945785   23477 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:54.946390   23477 main.go:141] libmachine: Using API Version  1
	I1117 16:07:54.946416   23477 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:54.946715   23477 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:54.946859   23477 main.go:141] libmachine: (functional-074045) Calling .DriverName
	I1117 16:07:54.947113   23477 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:07:54.947402   23477 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:07:54.947439   23477 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:54.961665   23477 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40067
	I1117 16:07:54.962088   23477 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:54.962617   23477 main.go:141] libmachine: Using API Version  1
	I1117 16:07:54.962638   23477 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:54.962940   23477 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:54.963192   23477 main.go:141] libmachine: (functional-074045) Calling .DriverName
	I1117 16:07:54.994917   23477 out.go:177] * Using the kvm2 driver based on existing profile
	I1117 16:07:54.996090   23477 start.go:298] selected driver: kvm2
	I1117 16:07:54.996103   23477 start.go:902] validating driver "kvm2" against &{Name:functional-074045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-074045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.223 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:07:54.996218   23477 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:07:54.998244   23477 out.go:177] 
	W1117 16:07:54.999544   23477 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1117 16:07:55.000835   23477 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074045 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.29s)
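`--dry-run` runs the full validation path without touching the VM, which is why the 250MB request trips the 1800MB floor as exit 23 in roughly 140ms. A hedged passing variant (a sketch, not run here; any value at or above the stated minimum should validate):

	out/minikube-linux-amd64 start -p functional-074045 --dry-run --memory 2048MB --alsologtostderr --driver=kvm2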

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-074045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-074045 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (158.233414ms)
-- stdout --
	* [functional-074045] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1117 16:07:55.209433   23531 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:07:55.209614   23531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:55.209627   23531 out.go:309] Setting ErrFile to fd 2...
	I1117 16:07:55.209635   23531 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:07:55.210068   23531 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:07:55.210828   23531 out.go:303] Setting JSON to false
	I1117 16:07:55.212058   23531 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3021,"bootTime":1700234254,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1117 16:07:55.212140   23531 start.go:138] virtualization: kvm guest
	I1117 16:07:55.214343   23531 out.go:177] * [functional-074045] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1117 16:07:55.215880   23531 out.go:177]   - MINIKUBE_LOCATION=17634
	I1117 16:07:55.215897   23531 notify.go:220] Checking for updates...
	I1117 16:07:55.217317   23531 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1117 16:07:55.219491   23531 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	I1117 16:07:55.221046   23531 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	I1117 16:07:55.222545   23531 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1117 16:07:55.224048   23531 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1117 16:07:55.226620   23531 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:07:55.226989   23531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:07:55.227043   23531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:55.243663   23531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41733
	I1117 16:07:55.244094   23531 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:55.244728   23531 main.go:141] libmachine: Using API Version  1
	I1117 16:07:55.244760   23531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:55.245113   23531 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:55.245338   23531 main.go:141] libmachine: (functional-074045) Calling .DriverName
	I1117 16:07:55.245618   23531 driver.go:378] Setting default libvirt URI to qemu:///system
	I1117 16:07:55.245964   23531 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:07:55.246010   23531 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:07:55.261680   23531 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39901
	I1117 16:07:55.262310   23531 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:07:55.262791   23531 main.go:141] libmachine: Using API Version  1
	I1117 16:07:55.262819   23531 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:07:55.263143   23531 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:07:55.263348   23531 main.go:141] libmachine: (functional-074045) Calling .DriverName
	I1117 16:07:55.296942   23531 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1117 16:07:55.298457   23531 start.go:298] selected driver: kvm2
	I1117 16:07:55.298478   23531 start.go:902] validating driver "kvm2" against &{Name:functional-074045 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:functional-074045 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.223 Port:8441 KubernetesVersion:v1.28.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1117 16:07:55.298622   23531 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1117 16:07:55.301165   23531 out.go:177] 
	W1117 16:07:55.302486   23531 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1117 16:07:55.303781   23531 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
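Same dry-run validation as above, but with localized output (note "sur" / "Utilisation du pilote" and the translated RSRC_INSUFFICIENT_REQ_MEMORY message). The test presumably selects the translation via the process locale; a hedged sketch, assuming minikube honors LC_ALL:

	LC_ALL=fr out/minikube-linux-amd64 start -p functional-074045 --dry-run --memory 250MB --driver=kvm2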

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
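The second call above exercises `status -f` with a Go template over the exported fields (Host, Kubelet, APIServer, Kubeconfig); the "kublet" spelling in the test's format string is just a literal label, not a field name. A hedged standalone sketch:

	out/minikube-linux-amd64 -p functional-074045 status -f 'kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	out/minikube-linux-amd64 -p functional-074045 status -o json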

TestFunctional/parallel/ServiceCmdConnect (10.53s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-074045 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-074045 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-98tz2" [b184026c-c73a-467f-bc1b-30765b7a0449] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-98tz2" [b184026c-c73a-467f-bc1b-30765b7a0449] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.026391769s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.223:30472
functional_test.go:1674: http://192.168.39.223:30472: success! body:
Hostname: hello-node-connect-55497b8b78-98tz2
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.223:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.223:30472
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.53s)
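The connect flow above, condensed: deploy echoserver, expose it as a NodePort, ask minikube for the URL, and fetch it (the body echoed back is the pod hostname plus request metadata). The curl step is a stand-in for the test's own HTTP client:

	kubectl --context functional-074045 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-074045 expose deployment hello-node-connect --type=NodePort --port=8080
	URL=$(out/minikube-linux-amd64 -p functional-074045 service hello-node-connect --url)
	curl -s "$URL"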

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (56.8s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0a9be300-1021-414e-8f34-ef13929106f3] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.014766105s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-074045 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-074045 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-074045 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-074045 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-074045 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ba74082c-0166-4fa2-8b10-7cad97d809d4] Pending
helpers_test.go:344: "sp-pod" [ba74082c-0166-4fa2-8b10-7cad97d809d4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ba74082c-0166-4fa2-8b10-7cad97d809d4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.032372293s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-074045 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-074045 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-074045 delete -f testdata/storage-provisioner/pod.yaml: (2.837727708s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-074045 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cd3c452b-9ece-49a4-8e2c-8b616e7298e4] Pending
helpers_test.go:344: "sp-pod" [cd3c452b-9ece-49a4-8e2c-8b616e7298e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [cd3c452b-9ece-49a4-8e2c-8b616e7298e4] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 29.013777678s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-074045 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (56.80s)
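The block above is the standard persistence check: write a marker file into the PVC-backed mount, delete and recreate the pod, then confirm the marker survived the restart. A stand-alone sketch of the same flow (hypothetical helper, not the test's own code; assumes kubectl on PATH and the functional-074045 context from this run):

package main

import (
	"log"
	"os/exec"
)

// run executes one command, echoing output and aborting on failure.
func run(args ...string) {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
	log.Printf("%v\n%s", args, out)
}

func main() {
	ctx := "--context=functional-074045"
	// Write a marker into the mount backed by the claim.
	run("kubectl", ctx, "exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	// Recreate the pod that mounts the claim.
	run("kubectl", ctx, "delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", ctx, "apply", "-f", "testdata/storage-provisioner/pod.yaml")
	run("kubectl", ctx, "wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=180s")
	// The marker must still exist: the volume, not the pod, owns the data.
	run("kubectl", ctx, "exec", "sp-pod", "--", "ls", "/tmp/mount")
}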

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh -n functional-074045 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 cp functional-074045:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd471082309/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh -n functional-074045 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.07s)

TestFunctional/parallel/MySQL (34.28s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-074045 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-jgl9d" [17d9245e-4b9d-462d-a321-930d00178957] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-jgl9d" [17d9245e-4b9d-462d-a321-930d00178957] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 27.020755214s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;": exit status 1 (343.533074ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;": exit status 1 (303.553957ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;": exit status 1 (239.18245ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-074045 exec mysql-859648c796-jgl9d -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.28s)
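Note the two distinct failures before the final successful run: ERROR 1045 and ERROR 2002 are both transient states of a MySQL container whose entrypoint is still initializing, which is why the test simply re-runs the query until it succeeds. A minimal retry loop in the same spirit (hypothetical helper, not the test's own code; assumes kubectl on PATH):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL re-runs a trivial query until mysqld inside the pod
// answers; access-denied (1045) and missing-socket (2002) errors are
// expected while the container is still bootstrapping.
func waitForMySQL(kubectx, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Printf("ready:\n%s", out)
			return nil
		}
		time.Sleep(2 * time.Second) // back off before the next probe
	}
	return fmt.Errorf("mysql in %s not ready within %s", pod, timeout)
}

func main() {
	if err := waitForMySQL("functional-074045", "mysql-859648c796-jgl9d", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}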

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16558/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /etc/test/nested/copy/16558/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (1.39s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16558.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /etc/ssl/certs/16558.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16558.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /usr/share/ca-certificates/16558.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/165582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /etc/ssl/certs/165582.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/165582.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /usr/share/ca-certificates/165582.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.39s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-074045 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
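The go-template passed to kubectl above walks the label map of the first node and prints each key. The same template can be exercised locally with Go's text/template against a stand-in for the NodeList (the sample data is hypothetical; the template string is the one from the test):

package main

import (
	"os"
	"text/template"
)

func main() {
	// Stand-in for the JSON shape kubectl feeds the template.
	nodes := map[string]any{
		"items": []map[string]any{
			{"metadata": map[string]any{"labels": map[string]string{
				"kubernetes.io/hostname": "functional-074045",
				"kubernetes.io/os":       "linux",
			}}},
		},
	}
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}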

TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh "sudo systemctl is-active crio": exit status 1 (269.887134ms)

-- stdout --
	inactive

-- /stdout --
** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
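systemctl is-active exits 0 only when the unit is active, so this test passes precisely because the command fails: "inactive" on stdout plus exit status 3 shows crio is disabled while docker is the selected runtime. A sketch of asserting on that exit code (illustrative only; uses the binary path and profile from this run):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// is-active returns non-zero for anything but "active".
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-074045",
		"ssh", "sudo systemctl is-active crio")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("FAIL: crio is active:\n%s", out)
	case errors.As(err, &ee):
		fmt.Printf("ok: crio not active (exit %d): %s", ee.ExitCode(), out)
	default:
		fmt.Println("could not run command:", err)
	}
}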

TestFunctional/parallel/License (0.88s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.88s)

TestFunctional/parallel/ServiceCmd/DeployApp (14.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-074045 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-074045 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-npfr5" [6a5accec-b8bd-491c-9659-70fe0485279f] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-npfr5" [6a5accec-b8bd-491c-9659-70fe0485279f] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.038813231s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.30s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.67s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.67s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074045 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.3
registry.k8s.io/kube-proxy:v1.28.3
registry.k8s.io/kube-controller-manager:v1.28.3
registry.k8s.io/kube-apiserver:v1.28.3
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-074045
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-074045
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074045 image ls --format short --alsologtostderr:
I1117 16:08:19.662278   24270 out.go:296] Setting OutFile to fd 1 ...
I1117 16:08:19.662398   24270 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:19.662421   24270 out.go:309] Setting ErrFile to fd 2...
I1117 16:08:19.662426   24270 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:19.662662   24270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
I1117 16:08:19.663291   24270 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:19.663407   24270 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:19.663801   24270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:19.663856   24270 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:19.679267   24270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43683
I1117 16:08:19.679764   24270 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:19.680370   24270 main.go:141] libmachine: Using API Version  1
I1117 16:08:19.680390   24270 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:19.680705   24270 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:19.680876   24270 main.go:141] libmachine: (functional-074045) Calling .GetState
I1117 16:08:19.682991   24270 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:19.683043   24270 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:19.697986   24270 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34171
I1117 16:08:19.698382   24270 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:19.698852   24270 main.go:141] libmachine: Using API Version  1
I1117 16:08:19.698867   24270 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:19.699240   24270 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:19.699439   24270 main.go:141] libmachine: (functional-074045) Calling .DriverName
I1117 16:08:19.699629   24270 ssh_runner.go:195] Run: systemctl --version
I1117 16:08:19.699666   24270 main.go:141] libmachine: (functional-074045) Calling .GetSSHHostname
I1117 16:08:19.702721   24270 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:19.703048   24270 main.go:141] libmachine: (functional-074045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f6:7e", ip: ""} in network mk-functional-074045: {Iface:virbr1 ExpiryTime:2023-11-17 17:05:06 +0000 UTC Type:0 Mac:52:54:00:b8:f6:7e Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-074045 Clientid:01:52:54:00:b8:f6:7e}
I1117 16:08:19.703080   24270 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined IP address 192.168.39.223 and MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:19.703243   24270 main.go:141] libmachine: (functional-074045) Calling .GetSSHPort
I1117 16:08:19.703414   24270 main.go:141] libmachine: (functional-074045) Calling .GetSSHKeyPath
I1117 16:08:19.703594   24270 main.go:141] libmachine: (functional-074045) Calling .GetSSHUsername
I1117 16:08:19.703747   24270 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/functional-074045/id_rsa Username:docker}
I1117 16:08:19.843940   24270 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1117 16:08:19.875558   24270 main.go:141] libmachine: Making call to close driver server
I1117 16:08:19.875573   24270 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:19.875911   24270 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:19.875942   24270 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
I1117 16:08:19.875959   24270 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:19.875978   24270 main.go:141] libmachine: Making call to close driver server
I1117 16:08:19.876046   24270 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:19.876283   24270 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:19.876298   24270 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:19.876319   24270 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074045 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-074045 | 9b7321f7825a3 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.28.3           | 10baa1ca17068 | 122MB  |
| registry.k8s.io/kube-proxy                  | v1.28.3           | bfc896cf80fba | 73.1MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/kube-scheduler              | v1.28.3           | 6d1b4fd1b182d | 60.1MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7               | bdba757bc9336 | 501MB  |
| registry.k8s.io/etcd                        | 3.5.9-0           | 73deb9a3f7025 | 294MB  |
| registry.k8s.io/coredns/coredns             | v1.10.1           | ead0a4a53df89 | 53.6MB |
| gcr.io/google-containers/addon-resizer      | functional-074045 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | c20060033e06f | 187MB  |
| registry.k8s.io/kube-apiserver              | v1.28.3           | 5374347291230 | 126MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074045 image ls --format table --alsologtostderr:
I1117 16:08:22.269406   24409 out.go:296] Setting OutFile to fd 1 ...
I1117 16:08:22.269524   24409 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:22.269533   24409 out.go:309] Setting ErrFile to fd 2...
I1117 16:08:22.269538   24409 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:22.269725   24409 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
I1117 16:08:22.270314   24409 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:22.270409   24409 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:22.270771   24409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:22.270811   24409 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:22.284922   24409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45395
I1117 16:08:22.285314   24409 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:22.285916   24409 main.go:141] libmachine: Using API Version  1
I1117 16:08:22.285942   24409 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:22.286359   24409 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:22.286564   24409 main.go:141] libmachine: (functional-074045) Calling .GetState
I1117 16:08:22.288354   24409 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:22.288390   24409 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:22.302287   24409 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45963
I1117 16:08:22.302666   24409 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:22.303157   24409 main.go:141] libmachine: Using API Version  1
I1117 16:08:22.303189   24409 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:22.303452   24409 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:22.303631   24409 main.go:141] libmachine: (functional-074045) Calling .DriverName
I1117 16:08:22.303847   24409 ssh_runner.go:195] Run: systemctl --version
I1117 16:08:22.303875   24409 main.go:141] libmachine: (functional-074045) Calling .GetSSHHostname
I1117 16:08:22.306823   24409 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:22.307188   24409 main.go:141] libmachine: (functional-074045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f6:7e", ip: ""} in network mk-functional-074045: {Iface:virbr1 ExpiryTime:2023-11-17 17:05:06 +0000 UTC Type:0 Mac:52:54:00:b8:f6:7e Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-074045 Clientid:01:52:54:00:b8:f6:7e}
I1117 16:08:22.307226   24409 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined IP address 192.168.39.223 and MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:22.307404   24409 main.go:141] libmachine: (functional-074045) Calling .GetSSHPort
I1117 16:08:22.307583   24409 main.go:141] libmachine: (functional-074045) Calling .GetSSHKeyPath
I1117 16:08:22.307724   24409 main.go:141] libmachine: (functional-074045) Calling .GetSSHUsername
I1117 16:08:22.307839   24409 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/functional-074045/id_rsa Username:docker}
I1117 16:08:22.396725   24409 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1117 16:08:22.552808   24409 main.go:141] libmachine: Making call to close driver server
I1117 16:08:22.552829   24409 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:22.553075   24409 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:22.553101   24409 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:22.553111   24409 main.go:141] libmachine: Making call to close driver server
I1117 16:08:22.553119   24409 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:22.553365   24409 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
I1117 16:08:22.553405   24409 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:22.553421   24409 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.35s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074045 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-074045"],"size":"32900000"},{"id":"c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"187000000"},{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53600000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"9b7321f7825a3fbbc2677962ae87ccde4956b7bc430293fdc8456a945e72d92b","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-074045"],"size":"30"},{"id":"bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.28.3"],"size":"73100000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.3"],"size":"126000000"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"294000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.3"],"size":"122000000"},{"id":"6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.3"],"size":"60100000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074045 image ls --format json --alsologtostderr:
I1117 16:08:22.035051   24385 out.go:296] Setting OutFile to fd 1 ...
I1117 16:08:22.035332   24385 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:22.035342   24385 out.go:309] Setting ErrFile to fd 2...
I1117 16:08:22.035349   24385 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:22.035559   24385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
I1117 16:08:22.036120   24385 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:22.036239   24385 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:22.036661   24385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:22.036712   24385 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:22.051143   24385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38485
I1117 16:08:22.051542   24385 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:22.052057   24385 main.go:141] libmachine: Using API Version  1
I1117 16:08:22.052085   24385 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:22.052492   24385 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:22.052683   24385 main.go:141] libmachine: (functional-074045) Calling .GetState
I1117 16:08:22.054639   24385 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:22.054673   24385 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:22.068922   24385 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44105
I1117 16:08:22.069304   24385 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:22.069772   24385 main.go:141] libmachine: Using API Version  1
I1117 16:08:22.069800   24385 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:22.070222   24385 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:22.070455   24385 main.go:141] libmachine: (functional-074045) Calling .DriverName
I1117 16:08:22.070680   24385 ssh_runner.go:195] Run: systemctl --version
I1117 16:08:22.070703   24385 main.go:141] libmachine: (functional-074045) Calling .GetSSHHostname
I1117 16:08:22.073512   24385 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:22.073947   24385 main.go:141] libmachine: (functional-074045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f6:7e", ip: ""} in network mk-functional-074045: {Iface:virbr1 ExpiryTime:2023-11-17 17:05:06 +0000 UTC Type:0 Mac:52:54:00:b8:f6:7e Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-074045 Clientid:01:52:54:00:b8:f6:7e}
I1117 16:08:22.073992   24385 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined IP address 192.168.39.223 and MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:22.074125   24385 main.go:141] libmachine: (functional-074045) Calling .GetSSHPort
I1117 16:08:22.074289   24385 main.go:141] libmachine: (functional-074045) Calling .GetSSHKeyPath
I1117 16:08:22.074422   24385 main.go:141] libmachine: (functional-074045) Calling .GetSSHUsername
I1117 16:08:22.074544   24385 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/functional-074045/id_rsa Username:docker}
I1117 16:08:22.164074   24385 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1117 16:08:22.208023   24385 main.go:141] libmachine: Making call to close driver server
I1117 16:08:22.208041   24385 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:22.208336   24385 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
I1117 16:08:22.208357   24385 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:22.208373   24385 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:22.208384   24385 main.go:141] libmachine: Making call to close driver server
I1117 16:08:22.208394   24385 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:22.208630   24385 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
I1117 16:08:22.208638   24385 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:22.208651   24385 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
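Each element of the JSON listing above carries the same four fields, so it decodes into a small struct; note that size is a decimal byte count encoded as a string, not a number. A minimal decoder (sketch; the sample entry is copied from the listing above):

package main

import (
	"encoding/json"
	"fmt"
)

// image mirrors one element of `image ls --format json` as printed above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	raw := `[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	          "repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],
	          "size":"31500000"}]`
	var imgs []image
	if err := json.Unmarshal([]byte(raw), &imgs); err != nil {
		panic(err)
	}
	for _, im := range imgs {
		fmt.Println(im.RepoTags[0], im.Size, "bytes")
	}
}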

TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074045 image ls --format yaml --alsologtostderr:
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "294000000"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53600000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: c20060033e06f882b0fbe2db7d974d72e0887a3be5e554efdb0dcf8d53512647
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "187000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-074045
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 9b7321f7825a3fbbc2677962ae87ccde4956b7bc430293fdc8456a945e72d92b
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-074045
size: "30"
- id: 10baa1ca17068a5cc50b0df9d18abc50cbc239d9c2cd7f3355ce35645d49f3d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.3
size: "122000000"
- id: 6d1b4fd1b182d88b748bec936b00b2ff9d549eebcbc7d26df5043b79974277c4
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.3
size: "60100000"
- id: bfc896cf80fba4806aaccd043f61c3663b723687ad9f3b4f5057b98c46fcefdf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.28.3
size: "73100000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 53743472912306d2d17cab3f458cc57f2012f89ed0e9372a2d2b1fa1b20a8076
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.3
size: "126000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074045 image ls --format yaml --alsologtostderr:
I1117 16:08:19.961620   24293 out.go:296] Setting OutFile to fd 1 ...
I1117 16:08:19.961902   24293 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:19.961913   24293 out.go:309] Setting ErrFile to fd 2...
I1117 16:08:19.961920   24293 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:19.962184   24293 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
I1117 16:08:19.962782   24293 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:19.962902   24293 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:19.963328   24293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:19.963379   24293 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:19.977407   24293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41081
I1117 16:08:19.977873   24293 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:19.978435   24293 main.go:141] libmachine: Using API Version  1
I1117 16:08:19.978458   24293 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:19.978791   24293 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:19.978973   24293 main.go:141] libmachine: (functional-074045) Calling .GetState
I1117 16:08:19.980701   24293 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:19.980739   24293 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:19.994642   24293 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41481
I1117 16:08:19.995099   24293 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:19.995658   24293 main.go:141] libmachine: Using API Version  1
I1117 16:08:19.995684   24293 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:19.995991   24293 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:19.996168   24293 main.go:141] libmachine: (functional-074045) Calling .DriverName
I1117 16:08:19.996392   24293 ssh_runner.go:195] Run: systemctl --version
I1117 16:08:19.996415   24293 main.go:141] libmachine: (functional-074045) Calling .GetSSHHostname
I1117 16:08:19.999092   24293 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:19.999539   24293 main.go:141] libmachine: (functional-074045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f6:7e", ip: ""} in network mk-functional-074045: {Iface:virbr1 ExpiryTime:2023-11-17 17:05:06 +0000 UTC Type:0 Mac:52:54:00:b8:f6:7e Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-074045 Clientid:01:52:54:00:b8:f6:7e}
I1117 16:08:19.999591   24293 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined IP address 192.168.39.223 and MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:19.999780   24293 main.go:141] libmachine: (functional-074045) Calling .GetSSHPort
I1117 16:08:19.999980   24293 main.go:141] libmachine: (functional-074045) Calling .GetSSHKeyPath
I1117 16:08:20.000178   24293 main.go:141] libmachine: (functional-074045) Calling .GetSSHUsername
I1117 16:08:20.000319   24293 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/functional-074045/id_rsa Username:docker}
I1117 16:08:20.088947   24293 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I1117 16:08:20.117379   24293 main.go:141] libmachine: Making call to close driver server
I1117 16:08:20.117392   24293 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:20.117697   24293 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
I1117 16:08:20.117749   24293 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:20.117765   24293 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:20.117776   24293 main.go:141] libmachine: Making call to close driver server
I1117 16:08:20.117785   24293 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:20.118057   24293 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:20.118070   24293 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:20.118165   24293 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.38s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh pgrep buildkitd: exit status 1 (227.382937ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image build -t localhost/my-image:functional-074045 testdata/build --alsologtostderr
2023/11/17 16:08:21 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image build -t localhost/my-image:functional-074045 testdata/build --alsologtostderr: (3.302687486s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-074045 image build -t localhost/my-image:functional-074045 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 5fbe05b0f018
Removing intermediate container 5fbe05b0f018
---> 81cb3869237b
Step 3/3 : ADD content.txt /
---> 5001c51fad1f
Successfully built 5001c51fad1f
Successfully tagged localhost/my-image:functional-074045
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-074045 image build -t localhost/my-image:functional-074045 testdata/build --alsologtostderr:
I1117 16:08:20.561084   24349 out.go:296] Setting OutFile to fd 1 ...
I1117 16:08:20.561369   24349 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:20.561378   24349 out.go:309] Setting ErrFile to fd 2...
I1117 16:08:20.561383   24349 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1117 16:08:20.561548   24349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
I1117 16:08:20.562093   24349 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:20.562618   24349 config.go:182] Loaded profile config "functional-074045": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
I1117 16:08:20.563074   24349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:20.563116   24349 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:20.577262   24349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37509
I1117 16:08:20.577789   24349 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:20.578438   24349 main.go:141] libmachine: Using API Version  1
I1117 16:08:20.578464   24349 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:20.578803   24349 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:20.578997   24349 main.go:141] libmachine: (functional-074045) Calling .GetState
I1117 16:08:20.580952   24349 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I1117 16:08:20.580996   24349 main.go:141] libmachine: Launching plugin server for driver kvm2
I1117 16:08:20.594854   24349 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41533
I1117 16:08:20.595190   24349 main.go:141] libmachine: () Calling .GetVersion
I1117 16:08:20.595659   24349 main.go:141] libmachine: Using API Version  1
I1117 16:08:20.595681   24349 main.go:141] libmachine: () Calling .SetConfigRaw
I1117 16:08:20.596004   24349 main.go:141] libmachine: () Calling .GetMachineName
I1117 16:08:20.596191   24349 main.go:141] libmachine: (functional-074045) Calling .DriverName
I1117 16:08:20.596414   24349 ssh_runner.go:195] Run: systemctl --version
I1117 16:08:20.596445   24349 main.go:141] libmachine: (functional-074045) Calling .GetSSHHostname
I1117 16:08:20.599300   24349 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:20.599735   24349 main.go:141] libmachine: (functional-074045) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:f6:7e", ip: ""} in network mk-functional-074045: {Iface:virbr1 ExpiryTime:2023-11-17 17:05:06 +0000 UTC Type:0 Mac:52:54:00:b8:f6:7e Iaid: IPaddr:192.168.39.223 Prefix:24 Hostname:functional-074045 Clientid:01:52:54:00:b8:f6:7e}
I1117 16:08:20.599772   24349 main.go:141] libmachine: (functional-074045) DBG | domain functional-074045 has defined IP address 192.168.39.223 and MAC address 52:54:00:b8:f6:7e in network mk-functional-074045
I1117 16:08:20.599882   24349 main.go:141] libmachine: (functional-074045) Calling .GetSSHPort
I1117 16:08:20.600032   24349 main.go:141] libmachine: (functional-074045) Calling .GetSSHKeyPath
I1117 16:08:20.600159   24349 main.go:141] libmachine: (functional-074045) Calling .GetSSHUsername
I1117 16:08:20.600307   24349 sshutil.go:53] new ssh client: &{IP:192.168.39.223 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/functional-074045/id_rsa Username:docker}
I1117 16:08:20.691888   24349 build_images.go:151] Building image from path: /tmp/build.1628025104.tar
I1117 16:08:20.691969   24349 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1117 16:08:20.710021   24349 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1628025104.tar
I1117 16:08:20.714277   24349 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1628025104.tar: stat -c "%s %y" /var/lib/minikube/build/build.1628025104.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1628025104.tar': No such file or directory
I1117 16:08:20.714314   24349 ssh_runner.go:362] scp /tmp/build.1628025104.tar --> /var/lib/minikube/build/build.1628025104.tar (3072 bytes)
I1117 16:08:20.739354   24349 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1628025104
I1117 16:08:20.747839   24349 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1628025104 -xf /var/lib/minikube/build/build.1628025104.tar
I1117 16:08:20.756174   24349 docker.go:346] Building image: /var/lib/minikube/build/build.1628025104
I1117 16:08:20.756239   24349 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-074045 /var/lib/minikube/build/build.1628025104
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I1117 16:08:23.784572   24349 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-074045 /var/lib/minikube/build/build.1628025104: (3.028303383s)
I1117 16:08:23.784645   24349 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1628025104
I1117 16:08:23.794842   24349 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1628025104.tar
I1117 16:08:23.805907   24349 build_images.go:207] Built localhost/my-image:functional-074045 from /tmp/build.1628025104.tar
I1117 16:08:23.805935   24349 build_images.go:123] succeeded building to: functional-074045
I1117 16:08:23.805940   24349 build_images.go:124] failed building to: 
I1117 16:08:23.805995   24349 main.go:141] libmachine: Making call to close driver server
I1117 16:08:23.806004   24349 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:23.806320   24349 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:23.806343   24349 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:23.806357   24349 main.go:141] libmachine: Making call to close driver server
I1117 16:08:23.806369   24349 main.go:141] libmachine: (functional-074045) Calling .Close
I1117 16:08:23.806592   24349 main.go:141] libmachine: Successfully made call to close driver server
I1117 16:08:23.806606   24349 main.go:141] libmachine: Making call to close connection to plugin binary
I1117 16:08:23.806615   24349 main.go:141] libmachine: (functional-074045) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.74s)
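The stderr trace makes the build mechanics visible: the client tars the build context, copies the tarball into the VM under /var/lib/minikube/build, untars it, and runs docker build there. Judging from the Step 1/3 .. 3/3 lines, the testdata/build context is a three-instruction Dockerfile plus a content.txt file. A hypothetical reconstruction that drives the same command against a freshly generated context:

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a context like testdata/build, inferred from the build steps above.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		log.Fatal(err)
	}
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// minikube tars this directory, ships it into the VM, and builds there.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-074045",
		"image", "build", "-t", "localhost/my-image:functional-074045", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}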

TestFunctional/parallel/ImageCommands/Setup (2.06s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.032484063s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-074045
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.06s)

TestFunctional/parallel/DockerEnv/bash (0.89s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-074045 docker-env) && out/minikube-linux-amd64 status -p functional-074045"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-074045 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.89s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image load --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image load --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr: (4.115108724s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.32s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image load --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image load --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr: (2.203130796s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.861343037s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-074045
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image load --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image load --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr: (4.28117077s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.47s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 service list -o json
functional_test.go:1493: Took "311.135295ms" to run "out/minikube-linux-amd64 -p functional-074045 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.223:32522
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.35s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.34s)

TestFunctional/parallel/ServiceCmd/URL (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.223:32522
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
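
As a usage note, "service --url" prints the reachable NodePort endpoint, so the natural follow-up is to request it directly. A sketch, assuming curl is available and reusing the hello-node service from the log above:

	URL=$(out/minikube-linux-amd64 -p functional-074045 service hello-node --url)
	curl -s "$URL"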

TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "290.846748ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "81.628139ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.37s)

TestFunctional/parallel/MountCmd/any-port (27.13s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdany-port3248421408/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1700237267883770847" to /tmp/TestFunctionalparallelMountCmdany-port3248421408/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1700237267883770847" to /tmp/TestFunctionalparallelMountCmdany-port3248421408/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1700237267883770847" to /tmp/TestFunctionalparallelMountCmdany-port3248421408/001/test-1700237267883770847
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.556137ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 17 16:07 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 17 16:07 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 17 16:07 test-1700237267883770847
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh cat /mount-9p/test-1700237267883770847
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-074045 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4008efd8-1e5e-48c2-b646-189d318310f4] Pending
helpers_test.go:344: "busybox-mount" [4008efd8-1e5e-48c2-b646-189d318310f4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4008efd8-1e5e-48c2-b646-189d318310f4] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4008efd8-1e5e-48c2-b646-189d318310f4] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 24.012127044s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-074045 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdany-port3248421408/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (27.13s)
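
The any-port check above reduces to three steps: start a 9p mount in the background, confirm it from inside the guest, and list the mounted files. A condensed sketch of the same sequence (host directory illustrative):

	out/minikube-linux-amd64 mount -p functional-074045 /tmp/hostdir:/mount-9p &
	out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-074045 ssh -- ls -la /mount-9p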

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "346.822706ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "58.097762ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image save gcr.io/google-containers/addon-resizer:functional-074045 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image save gcr.io/google-containers/addon-resizer:functional-074045 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.99990519s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.00s)

TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image rm gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.14s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.375358023s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.62s)
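
ImageSaveToFile and ImageLoadFromFile together exercise a tar round trip: save an image from the cluster to a local tar, then load it back. A condensed sketch of the same pair of commands (tar path illustrative):

	out/minikube-linux-amd64 -p functional-074045 image save gcr.io/google-containers/addon-resizer:functional-074045 /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-074045 image load /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-074045 image ls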

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.09s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-074045
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 image save --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-074045 image save --daemon gcr.io/google-containers/addon-resizer:functional-074045 --alsologtostderr: (2.057171315s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-074045
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (2.09s)

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdspecific-port2988298155/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (214.304858ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdspecific-port2988298155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh "sudo umount -f /mount-9p": exit status 1 (243.535161ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-074045 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdspecific-port2988298155/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4237618220/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4237618220/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4237618220/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T" /mount1: exit status 1 (310.437179ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-074045 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-074045 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4237618220/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4237618220/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-074045 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4237618220/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.68s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-074045
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-074045
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-074045
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (377.27s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-961990 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1117 16:35:47.396285   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:36:19.224859   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-961990 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (2m37.078748152s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-961990 cache add gcr.io/k8s-minikube/gvisor-addon:2
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-961990 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.076955374s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-961990 addons enable gvisor
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-961990 addons enable gvisor: (4.927326673s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [7f14b085-54c9-4bea-b2e7-b0bcf6b976ad] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.036253496s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-961990 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [2f6d6409-2888-46ca-8dc2-afaf48c0db78] Pending
helpers_test.go:344: "nginx-gvisor" [2f6d6409-2888-46ca-8dc2-afaf48c0db78] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1117 16:38:50.440721   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
helpers_test.go:344: "nginx-gvisor" [2f6d6409-2888-46ca-8dc2-afaf48c0db78] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 15.02967537s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-961990
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-961990: (1m31.902670835s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-961990 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E1117 16:40:30.661402   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:40.901582   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:47.396036   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-961990 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m8.837971682s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [7f14b085-54c9-4bea-b2e7-b0bcf6b976ad] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [7f14b085-54c9-4bea-b2e7-b0bcf6b976ad] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 5.023073426s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [2f6d6409-2888-46ca-8dc2-afaf48c0db78] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1117 16:41:42.342960   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 5.012050861s
helpers_test.go:175: Cleaning up "gvisor-961990" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-961990
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-961990: (2.000834182s)
--- PASS: TestGvisorAddon (377.27s)
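
The gVisor flow above requires the containerd runtime before the addon can be enabled; health is then checked by watching the gvisor pod in kube-system. A condensed sketch of the same steps, reusing the commands recorded in this block:

	out/minikube-linux-amd64 start -p gvisor-961990 --container-runtime=containerd --driver=kvm2
	out/minikube-linux-amd64 -p gvisor-961990 addons enable gvisor
	kubectl --context gvisor-961990 -n kube-system get pods -l kubernetes.io/minikube-addons=gvisor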

TestImageBuild/serial/Setup (52.14s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-366517 --driver=kvm2 
E1117 16:08:31.238748   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-366517 --driver=kvm2 : (52.13504883s)
--- PASS: TestImageBuild/serial/Setup (52.14s)

TestImageBuild/serial/NormalBuild (2.33s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-366517
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-366517: (2.326959274s)
--- PASS: TestImageBuild/serial/NormalBuild (2.33s)

TestImageBuild/serial/BuildWithBuildArg (1.41s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-366517
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-366517: (1.404931856s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.41s)

TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-366517
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.41s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-366517
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.29s)

TestIngressAddonLegacy/StartLegacyK8sCluster (92.65s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-533209 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 
E1117 16:10:47.396537   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-533209 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 : (1m32.650407155s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (92.65s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.44s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons enable ingress --alsologtostderr -v=5
E1117 16:11:15.079328   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons enable ingress --alsologtostderr -v=5: (17.442442163s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (17.44s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.52s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (47.68s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-533209 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-533209 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (13.69788175s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-533209 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-533209 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [40589fba-1bc4-42fb-8789-fce7be5f84da] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [40589fba-1bc4-42fb-8789-fce7be5f84da] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.02439053s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533209 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-533209 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533209 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.21
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons disable ingress-dns --alsologtostderr -v=1: (13.319964242s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons disable ingress --alsologtostderr -v=1: (7.498798937s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (47.68s)
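
The ingress validation pattern above is: enable the addons, apply an Ingress plus its backing pod and service, then curl from inside the VM with the expected Host header. A condensed sketch reusing the manifests named in the log:

	out/minikube-linux-amd64 -p ingress-addon-legacy-533209 addons enable ingress
	kubectl --context ingress-addon-legacy-533209 apply -f testdata/nginx-ingress-v1beta1.yaml -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-amd64 -p ingress-addon-legacy-533209 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"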

TestJSONOutput/start/Command (66.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-068342 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
E1117 16:12:32.146305   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.151587   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.161846   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.182171   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.222482   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.302811   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.463287   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:32.784055   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:33.424379   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:34.704862   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:37.265328   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:42.385879   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:12:52.626814   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:13:13.107664   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-068342 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m6.789551403s)
--- PASS: TestJSONOutput/start/Command (66.79s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.57s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-068342 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.57s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-068342 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.41s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-068342 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-068342 --output=json --user=testUser: (7.413522096s)
--- PASS: TestJSONOutput/stop/Command (7.41s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-527258 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-527258 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.96034ms)

-- stdout --
	{"specversion":"1.0","id":"a773d8eb-6a22-46a7-b447-fc33b850b6f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-527258] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"64ec7261-51b0-495d-8f26-9d36e7e901a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17634"}}
	{"specversion":"1.0","id":"0ab206e7-8071-4d47-a702-09666721b302","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e99ee6f3-6ad6-4c33-9561-ce4e78a65d8e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig"}}
	{"specversion":"1.0","id":"55cd8934-a0eb-47e9-944f-ba086bd847ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube"}}
	{"specversion":"1.0","id":"f14bf08a-33c3-44f4-943e-2f41832acba0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a46fd38e-9c12-4d74-aabe-daf84ac03136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"878219af-9269-4013-b4a1-a0842b4bb548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-527258" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-527258
--- PASS: TestErrorJSONOutput (0.22s)
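
With --output=json each line is a CloudEvents envelope, so the error record can be extracted mechanically. A sketch assuming jq is available (field names taken from the output above); it should print DRV_UNSUPPORTED_OS together with its message:

	out/minikube-linux-amd64 start -p json-output-error-527258 --memory=2200 --output=json --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'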

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (105.08s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-987229 --driver=kvm2 
E1117 16:13:54.068338   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-987229 --driver=kvm2 : (50.468564159s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-990066 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-990066 --driver=kvm2 : (51.932389708s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-987229
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-990066
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-990066" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-990066
helpers_test.go:175: Cleaning up "first-987229" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-987229
--- PASS: TestMinikubeProfile (105.08s)
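
Profile switching is just "minikube profile NAME" followed by a list to confirm which profile is active. A condensed sketch of the switch exercised above:

	out/minikube-linux-amd64 profile first-987229
	out/minikube-linux-amd64 profile list -ojson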

TestMountStart/serial/StartWithMountFirst (31.32s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-715836 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1117 16:15:15.991247   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-715836 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.31840072s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.32s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-715836 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-715836 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)
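
Editor's note: the second check is a simple presence test -- it passes if any line of `mount` inside the VM mentions the 9p filesystem. Illustratively (the matched line is an assumption; exact mount options vary by kernel and minikube version):

	out/minikube-linux-amd64 -p mount-start-1-715836 ssh -- mount | grep 9p
	# illustrative match: 192.168.39.1 on /minikube-host type 9p (rw,relatime,...)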

                                                
                                    
TestMountStart/serial/StartWithMountSecond (32.39s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-729037 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E1117 16:15:47.396212   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-729037 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (31.389419869s)
--- PASS: TestMountStart/serial/StartWithMountSecond (32.39s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.46s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729037 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729037 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.46s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.07s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-715836 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-715836 --alsologtostderr -v=5: (1.071218945s)
--- PASS: TestMountStart/serial/DeleteFirst (1.07s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729037 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729037 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (2.09s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-729037
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-729037: (2.094136059s)
--- PASS: TestMountStart/serial/Stop (2.09s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.92s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-729037
E1117 16:16:19.224000   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.229261   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.239512   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.259812   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.300179   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.380538   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.540967   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:19.861558   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:20.502506   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:21.783011   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:24.343781   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:29.464787   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:16:39.704963   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-729037: (24.923694885s)
--- PASS: TestMountStart/serial/RestartStopped (25.92s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729037 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-729037 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (180.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930207 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E1117 16:17:00.185314   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:17:32.146234   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:17:41.145529   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:17:59.832056   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:19:03.066573   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930207 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m59.623975817s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (180.07s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-930207 -- rollout status deployment/busybox: (3.003120172s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-bhc4q -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-tjm6q -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-bhc4q -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-tjm6q -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-bhc4q -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-tjm6q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.80s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-bhc4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-bhc4q -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-tjm6q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-930207 -- exec busybox-5bc68d56bd-tjm6q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)
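
Editor's note: the `sh -c` pipeline above extracts the resolved address of host.minikube.internal from busybox's nslookup output and then pings it from each pod. The `awk 'NR==5'` selector assumes a fixed five-line layout; the layout sketched below is one common busybox format and is an assumption about the test image:

	# illustrative busybox nslookup output; NR==5 is the answer line, and
	# `cut -d' ' -f3` takes its third space-separated field (the host IP)
	Server:    10.96.0.10
	Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
	
	Name:      host.minikube.internal
	Address 1: 192.168.39.1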

                                                
                                    
TestMultiNode/serial/AddNode (46.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-930207 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-930207 -v 3 --alsologtostderr: (45.598759273s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (46.18s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp testdata/cp-test.txt multinode-930207:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2396442477/001/cp-test_multinode-930207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207:/home/docker/cp-test.txt multinode-930207-m02:/home/docker/cp-test_multinode-930207_multinode-930207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m02 "sudo cat /home/docker/cp-test_multinode-930207_multinode-930207-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207:/home/docker/cp-test.txt multinode-930207-m03:/home/docker/cp-test_multinode-930207_multinode-930207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m03 "sudo cat /home/docker/cp-test_multinode-930207_multinode-930207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp testdata/cp-test.txt multinode-930207-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2396442477/001/cp-test_multinode-930207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207-m02:/home/docker/cp-test.txt multinode-930207:/home/docker/cp-test_multinode-930207-m02_multinode-930207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207 "sudo cat /home/docker/cp-test_multinode-930207-m02_multinode-930207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207-m02:/home/docker/cp-test.txt multinode-930207-m03:/home/docker/cp-test_multinode-930207-m02_multinode-930207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m03 "sudo cat /home/docker/cp-test_multinode-930207-m02_multinode-930207-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp testdata/cp-test.txt multinode-930207-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2396442477/001/cp-test_multinode-930207-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207-m03:/home/docker/cp-test.txt multinode-930207:/home/docker/cp-test_multinode-930207-m03_multinode-930207.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207 "sudo cat /home/docker/cp-test_multinode-930207-m03_multinode-930207.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 cp multinode-930207-m03:/home/docker/cp-test.txt multinode-930207-m02:/home/docker/cp-test_multinode-930207-m03_multinode-930207-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 ssh -n multinode-930207-m02 "sudo cat /home/docker/cp-test_multinode-930207-m03_multinode-930207-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.65s)

                                                
                                    
TestMultiNode/serial/StopNode (3.38s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-930207 node stop m03: (2.435400209s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930207 status: exit status 7 (458.447612ms)

                                                
                                                
-- stdout --
	multinode-930207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-930207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-930207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr: exit status 7 (483.722725ms)

                                                
                                                
-- stdout --
	multinode-930207
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-930207-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-930207-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 16:20:46.922767   31988 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:20:46.922954   31988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:20:46.922969   31988 out.go:309] Setting ErrFile to fd 2...
	I1117 16:20:46.922976   31988 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:20:46.923210   31988 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:20:46.923391   31988 out.go:303] Setting JSON to false
	I1117 16:20:46.923423   31988 mustload.go:65] Loading cluster: multinode-930207
	I1117 16:20:46.923473   31988 notify.go:220] Checking for updates...
	I1117 16:20:46.923961   31988 config.go:182] Loaded profile config "multinode-930207": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:20:46.923981   31988 status.go:255] checking status of multinode-930207 ...
	I1117 16:20:46.924566   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:46.924619   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:46.939103   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38655
	I1117 16:20:46.939496   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:46.940032   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:46.940054   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:46.940399   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:46.940610   31988 main.go:141] libmachine: (multinode-930207) Calling .GetState
	I1117 16:20:46.942387   31988 status.go:330] multinode-930207 host status = "Running" (err=<nil>)
	I1117 16:20:46.942405   31988 host.go:66] Checking if "multinode-930207" exists ...
	I1117 16:20:46.942732   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:46.942768   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:46.956895   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34895
	I1117 16:20:46.957342   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:46.957819   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:46.957850   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:46.958204   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:46.958376   31988 main.go:141] libmachine: (multinode-930207) Calling .GetIP
	I1117 16:20:46.961320   31988 main.go:141] libmachine: (multinode-930207) DBG | domain multinode-930207 has defined MAC address 52:54:00:73:e0:08 in network mk-multinode-930207
	I1117 16:20:46.962149   31988 main.go:141] libmachine: (multinode-930207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e0:08", ip: ""} in network mk-multinode-930207: {Iface:virbr1 ExpiryTime:2023-11-17 17:16:59 +0000 UTC Type:0 Mac:52:54:00:73:e0:08 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-930207 Clientid:01:52:54:00:73:e0:08}
	I1117 16:20:46.962213   31988 host.go:66] Checking if "multinode-930207" exists ...
	I1117 16:20:46.962206   31988 main.go:141] libmachine: (multinode-930207) DBG | domain multinode-930207 has defined IP address 192.168.39.108 and MAC address 52:54:00:73:e0:08 in network mk-multinode-930207
	I1117 16:20:46.962597   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:46.962675   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:46.980233   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44497
	I1117 16:20:46.980666   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:46.981116   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:46.981143   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:46.981480   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:46.981696   31988 main.go:141] libmachine: (multinode-930207) Calling .DriverName
	I1117 16:20:46.981873   31988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:20:46.981896   31988 main.go:141] libmachine: (multinode-930207) Calling .GetSSHHostname
	I1117 16:20:46.984994   31988 main.go:141] libmachine: (multinode-930207) DBG | domain multinode-930207 has defined MAC address 52:54:00:73:e0:08 in network mk-multinode-930207
	I1117 16:20:46.985413   31988 main.go:141] libmachine: (multinode-930207) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:73:e0:08", ip: ""} in network mk-multinode-930207: {Iface:virbr1 ExpiryTime:2023-11-17 17:16:59 +0000 UTC Type:0 Mac:52:54:00:73:e0:08 Iaid: IPaddr:192.168.39.108 Prefix:24 Hostname:multinode-930207 Clientid:01:52:54:00:73:e0:08}
	I1117 16:20:46.985457   31988 main.go:141] libmachine: (multinode-930207) DBG | domain multinode-930207 has defined IP address 192.168.39.108 and MAC address 52:54:00:73:e0:08 in network mk-multinode-930207
	I1117 16:20:46.985577   31988 main.go:141] libmachine: (multinode-930207) Calling .GetSSHPort
	I1117 16:20:46.985742   31988 main.go:141] libmachine: (multinode-930207) Calling .GetSSHKeyPath
	I1117 16:20:46.985874   31988 main.go:141] libmachine: (multinode-930207) Calling .GetSSHUsername
	I1117 16:20:46.985990   31988 sshutil.go:53] new ssh client: &{IP:192.168.39.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/multinode-930207/id_rsa Username:docker}
	I1117 16:20:47.093459   31988 ssh_runner.go:195] Run: systemctl --version
	I1117 16:20:47.100668   31988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:20:47.122061   31988 kubeconfig.go:92] found "multinode-930207" server: "https://192.168.39.108:8443"
	I1117 16:20:47.122119   31988 api_server.go:166] Checking apiserver status ...
	I1117 16:20:47.122160   31988 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1117 16:20:47.138533   31988 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1903/cgroup
	I1117 16:20:47.151950   31988 api_server.go:182] apiserver freezer: "8:freezer:/kubepods/burstable/pod382c1d87eb06659a1e8b37c54816f924/aa8b33aaa6f143032e77545d2f84953f1e03f754fbbf54e783fc747aef790ee3"
	I1117 16:20:47.152023   31988 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod382c1d87eb06659a1e8b37c54816f924/aa8b33aaa6f143032e77545d2f84953f1e03f754fbbf54e783fc747aef790ee3/freezer.state
	I1117 16:20:47.162872   31988 api_server.go:204] freezer state: "THAWED"
	I1117 16:20:47.162905   31988 api_server.go:253] Checking apiserver healthz at https://192.168.39.108:8443/healthz ...
	I1117 16:20:47.167752   31988 api_server.go:279] https://192.168.39.108:8443/healthz returned 200:
	ok
	I1117 16:20:47.167780   31988 status.go:421] multinode-930207 apiserver status = Running (err=<nil>)
	I1117 16:20:47.167791   31988 status.go:257] multinode-930207 status: &{Name:multinode-930207 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:20:47.167813   31988 status.go:255] checking status of multinode-930207-m02 ...
	I1117 16:20:47.168112   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:47.168152   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:47.183043   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36705
	I1117 16:20:47.183861   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:47.184735   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:47.184763   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:47.185125   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:47.185325   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .GetState
	I1117 16:20:47.187157   31988 status.go:330] multinode-930207-m02 host status = "Running" (err=<nil>)
	I1117 16:20:47.187188   31988 host.go:66] Checking if "multinode-930207-m02" exists ...
	I1117 16:20:47.187578   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:47.187627   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:47.201940   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44827
	I1117 16:20:47.202417   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:47.202856   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:47.202877   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:47.203154   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:47.203311   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .GetIP
	I1117 16:20:47.206388   31988 main.go:141] libmachine: (multinode-930207-m02) DBG | domain multinode-930207-m02 has defined MAC address 52:54:00:78:c3:a6 in network mk-multinode-930207
	I1117 16:20:47.206835   31988 main.go:141] libmachine: (multinode-930207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:a6", ip: ""} in network mk-multinode-930207: {Iface:virbr1 ExpiryTime:2023-11-17 17:18:18 +0000 UTC Type:0 Mac:52:54:00:78:c3:a6 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-930207-m02 Clientid:01:52:54:00:78:c3:a6}
	I1117 16:20:47.206863   31988 main.go:141] libmachine: (multinode-930207-m02) DBG | domain multinode-930207-m02 has defined IP address 192.168.39.164 and MAC address 52:54:00:78:c3:a6 in network mk-multinode-930207
	I1117 16:20:47.207015   31988 host.go:66] Checking if "multinode-930207-m02" exists ...
	I1117 16:20:47.207302   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:47.207339   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:47.221149   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38321
	I1117 16:20:47.221553   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:47.221935   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:47.221961   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:47.222339   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:47.222504   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .DriverName
	I1117 16:20:47.222688   31988 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1117 16:20:47.222706   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .GetSSHHostname
	I1117 16:20:47.225220   31988 main.go:141] libmachine: (multinode-930207-m02) DBG | domain multinode-930207-m02 has defined MAC address 52:54:00:78:c3:a6 in network mk-multinode-930207
	I1117 16:20:47.225614   31988 main.go:141] libmachine: (multinode-930207-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:c3:a6", ip: ""} in network mk-multinode-930207: {Iface:virbr1 ExpiryTime:2023-11-17 17:18:18 +0000 UTC Type:0 Mac:52:54:00:78:c3:a6 Iaid: IPaddr:192.168.39.164 Prefix:24 Hostname:multinode-930207-m02 Clientid:01:52:54:00:78:c3:a6}
	I1117 16:20:47.225643   31988 main.go:141] libmachine: (multinode-930207-m02) DBG | domain multinode-930207-m02 has defined IP address 192.168.39.164 and MAC address 52:54:00:78:c3:a6 in network mk-multinode-930207
	I1117 16:20:47.225780   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .GetSSHPort
	I1117 16:20:47.225962   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .GetSSHKeyPath
	I1117 16:20:47.226121   31988 main.go:141] libmachine: (multinode-930207-m02) Calling .GetSSHUsername
	I1117 16:20:47.226260   31988 sshutil.go:53] new ssh client: &{IP:192.168.39.164 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17634-9353/.minikube/machines/multinode-930207-m02/id_rsa Username:docker}
	I1117 16:20:47.316996   31988 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1117 16:20:47.329549   31988 status.go:257] multinode-930207-m02 status: &{Name:multinode-930207-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:20:47.329594   31988 status.go:255] checking status of multinode-930207-m03 ...
	I1117 16:20:47.329904   31988 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:20:47.329943   31988 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:20:47.345607   31988 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44293
	I1117 16:20:47.346039   31988 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:20:47.346492   31988 main.go:141] libmachine: Using API Version  1
	I1117 16:20:47.346515   31988 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:20:47.346829   31988 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:20:47.347024   31988 main.go:141] libmachine: (multinode-930207-m03) Calling .GetState
	I1117 16:20:47.348651   31988 status.go:330] multinode-930207-m03 host status = "Stopped" (err=<nil>)
	I1117 16:20:47.348664   31988 status.go:343] host is not running, skipping remaining checks
	I1117 16:20:47.348669   31988 status.go:257] multinode-930207-m03 status: &{Name:multinode-930207-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.38s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (31.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 node start m03 --alsologtostderr
E1117 16:20:47.395844   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-930207 node start m03 --alsologtostderr: (30.490825759s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (31.15s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (171.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-930207
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-930207
E1117 16:21:19.223660   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
multinode_test.go:290: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-930207: (27.613791067s)
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930207 --wait=true -v=8 --alsologtostderr
E1117 16:21:46.906802   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:22:10.440413   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:22:32.145925   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930207 --wait=true -v=8 --alsologtostderr: (2m23.34702288s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-930207
--- PASS: TestMultiNode/serial/RestartKeepsNodes (171.08s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-930207 node delete m03: (1.219580229s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.78s)
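
Editor's note: the go-template in the final check walks every node and, for its Ready condition, prints the condition's status followed by a newline. After deleting m03, a healthy cluster yields one "True" line per remaining node; the output below is illustrative:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	#  True
	#  True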

                                                
                                    
TestMultiNode/serial/StopMultiNode (25.56s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 stop
multinode_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p multinode-930207 stop: (25.370301333s)
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930207 status: exit status 7 (92.190891ms)

                                                
                                                
-- stdout --
	multinode-930207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-930207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:327: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr
multinode_test.go:327: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr: exit status 7 (100.680222ms)

                                                
                                                
-- stdout --
	multinode-930207
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-930207-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1117 16:24:36.882586   33394 out.go:296] Setting OutFile to fd 1 ...
	I1117 16:24:36.882840   33394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:24:36.882850   33394 out.go:309] Setting ErrFile to fd 2...
	I1117 16:24:36.882854   33394 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1117 16:24:36.883038   33394 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17634-9353/.minikube/bin
	I1117 16:24:36.883220   33394 out.go:303] Setting JSON to false
	I1117 16:24:36.883250   33394 mustload.go:65] Loading cluster: multinode-930207
	I1117 16:24:36.883302   33394 notify.go:220] Checking for updates...
	I1117 16:24:36.883780   33394 config.go:182] Loaded profile config "multinode-930207": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.28.3
	I1117 16:24:36.883801   33394 status.go:255] checking status of multinode-930207 ...
	I1117 16:24:36.884215   33394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:24:36.884270   33394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:24:36.903085   33394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38085
	I1117 16:24:36.903590   33394 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:24:36.904169   33394 main.go:141] libmachine: Using API Version  1
	I1117 16:24:36.904186   33394 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:24:36.904617   33394 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:24:36.904843   33394 main.go:141] libmachine: (multinode-930207) Calling .GetState
	I1117 16:24:36.906659   33394 status.go:330] multinode-930207 host status = "Stopped" (err=<nil>)
	I1117 16:24:36.906676   33394 status.go:343] host is not running, skipping remaining checks
	I1117 16:24:36.906683   33394 status.go:257] multinode-930207 status: &{Name:multinode-930207 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1117 16:24:36.906733   33394 status.go:255] checking status of multinode-930207-m02 ...
	I1117 16:24:36.907000   33394 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I1117 16:24:36.907037   33394 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1117 16:24:36.921134   33394 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38221
	I1117 16:24:36.922641   33394 main.go:141] libmachine: () Calling .GetVersion
	I1117 16:24:36.923498   33394 main.go:141] libmachine: Using API Version  1
	I1117 16:24:36.923525   33394 main.go:141] libmachine: () Calling .SetConfigRaw
	I1117 16:24:36.923876   33394 main.go:141] libmachine: () Calling .GetMachineName
	I1117 16:24:36.924135   33394 main.go:141] libmachine: (multinode-930207-m02) Calling .GetState
	I1117 16:24:36.925618   33394 status.go:330] multinode-930207-m02 host status = "Stopped" (err=<nil>)
	I1117 16:24:36.925634   33394 status.go:343] host is not running, skipping remaining checks
	I1117 16:24:36.925640   33394 status.go:257] multinode-930207-m02 status: &{Name:multinode-930207-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.56s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (106.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930207 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E1117 16:25:47.395965   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
E1117 16:26:19.223483   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930207 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m46.275519335s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-930207 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (106.83s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (51.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-930207
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930207-m02 --driver=kvm2 
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-930207-m02 --driver=kvm2 : exit status 14 (79.296106ms)

                                                
                                                
-- stdout --
	* [multinode-930207-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-930207-m02' is duplicated with machine name 'multinode-930207-m02' in profile 'multinode-930207'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-930207-m03 --driver=kvm2 
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-930207-m03 --driver=kvm2 : (50.084001051s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-930207
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-930207: exit status 80 (237.355673ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-930207
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-930207-m03 already exists in multinode-930207-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-930207-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.26s)
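
Editor's note: read together, the three invocations above cover both collision paths. A profile name matching an existing node machine is rejected up front (exit 14, MK_USAGE), while a standalone profile that happens to claim the next node name only fails later, when `node add` tries to create that node (exit 80, GUEST_NODE_ADD). Condensed from the log:

	out/minikube-linux-amd64 start -p multinode-930207-m02 --driver=kvm2   # exit 14: collides with node m02 of profile multinode-930207
	out/minikube-linux-amd64 start -p multinode-930207-m03 --driver=kvm2   # succeeds as its own profile
	out/minikube-linux-amd64 node add -p multinode-930207                  # exit 80: the next node would be m03, which now exists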

                                                
                                    
TestPreload (230.2s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-261892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E1117 16:27:32.147314   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
E1117 16:28:55.192269   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-261892 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m14.48180839s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-261892 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-261892 image pull gcr.io/k8s-minikube/busybox: (1.89276011s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-261892
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-261892: (13.116723095s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-261892 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E1117 16:30:47.396305   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-261892 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (1m19.421270838s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-261892 image list
helpers_test.go:175: Cleaning up "test-preload-261892" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-261892
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-261892: (1.069166569s)
--- PASS: TestPreload (230.20s)
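
Editor's note: the flow being validated is start v1.24.4 with --preload=false, pull gcr.io/k8s-minikube/busybox into the runtime, stop, then restart normally and confirm via `image list` that the pulled image survived the restart. As a hedged one-liner (the grep is illustrative, not what the Go test itself does):

	out/minikube-linux-amd64 -p test-preload-261892 image list | grep gcr.io/k8s-minikube/busybox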

                                                
                                    
TestScheduledStopUnix (121.15s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-270828 --memory=2048 --driver=kvm2 
E1117 16:31:19.224656   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-270828 --memory=2048 --driver=kvm2 : (49.378226993s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270828 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-270828 -n scheduled-stop-270828
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270828 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270828 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270828 -n scheduled-stop-270828
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-270828
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-270828 --schedule 15s
E1117 16:32:32.148092   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1117 16:32:42.267804   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-270828
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-270828: exit status 7 (81.313177ms)

                                                
                                                
-- stdout --
	scheduled-stop-270828
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270828 -n scheduled-stop-270828
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-270828 -n scheduled-stop-270828: exit status 7 (81.371491ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-270828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-270828
--- PASS: TestScheduledStopUnix (121.15s)
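
Editor's note: the scheduled-stop lifecycle exercised above, condensed (all commands appear verbatim in the log; the comments are editorial):

	out/minikube-linux-amd64 stop -p scheduled-stop-270828 --schedule 5m               # arm a stop five minutes out
	out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-270828  # inspect the pending schedule
	out/minikube-linux-amd64 stop -p scheduled-stop-270828 --cancel-scheduled          # disarm it
	out/minikube-linux-amd64 stop -p scheduled-stop-270828 --schedule 15s              # re-arm; the VM stops ~15s later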

                                                
                                    
TestSkaffold (143.53s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe352179943 version
skaffold_test.go:63: skaffold version: v2.9.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-936532 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-936532 --memory=2600 --driver=kvm2 : (52.666686528s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe352179943 run --minikube-profile skaffold-936532 --kube-context skaffold-936532 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe352179943 run --minikube-profile skaffold-936532 --kube-context skaffold-936532 --status-check=true --port-forward=false --interactive=false: (1m17.034394582s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6cd6474bc-pjv6h" [c3a8de7c-d42a-40cb-96a2-34b258866114] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 5.019473884s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-77c8458d95-s4dqj" [c54743f9-d425-4940-ab44-f45681ed1447] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.010713701s
helpers_test.go:175: Cleaning up "skaffold-936532" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-936532
--- PASS: TestSkaffold (143.53s)

                                                
                                    
TestRunningBinaryUpgrade (191.86s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.2235190726.exe start -p running-upgrade-514918 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.2235190726.exe start -p running-upgrade-514918 --memory=2200 --vm-driver=kvm2 : (2m22.38384708s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-514918 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E1117 16:40:20.420061   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:20.425401   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:20.435749   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:20.456076   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:20.496402   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:20.576935   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:20.737718   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:21.058385   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:40:21.698890   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
version_upgrade_test.go:143: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-514918 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (46.35404459s)
helpers_test.go:175: Cleaning up "running-upgrade-514918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-514918
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-514918: (1.511518255s)
--- PASS: TestRunningBinaryUpgrade (191.86s)
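The flow above is worth spelling out: the cluster is created by a v1.6.2 binary (which still used --vm-driver), then, without stopping it, the binary under test starts the same profile and must adopt and upgrade the running cluster in place. A sketch of that sequence with os/exec, with the commands copied from the log (an illustration of the flow, not the test's own code):

	package main

	import (
		"os"
		"os/exec"
	)

	// run executes a command, streams its output, and aborts on failure.
	func run(bin string, args ...string) {
		cmd := exec.Command(bin, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}

	func main() {
		const profile = "running-upgrade-514918"
		// 1. Create the cluster with the old release.
		run("/tmp/minikube-v1.6.2.2235190726.exe", "start", "-p", profile,
			"--memory=2200", "--vm-driver=kvm2")
		// 2. Without stopping it, start again with the binary under test,
		//    which must upgrade the running cluster in place.
		run("out/minikube-linux-amd64", "start", "-p", profile,
			"--memory=2200", "--alsologtostderr", "-v=1", "--driver=kvm2")
		// 3. Clean up.
		run("out/minikube-linux-amd64", "delete", "-p", profile)
	}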

                                                
                                    
TestKubernetesUpgrade (200.7s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2 : (1m39.28698997s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-943700
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-943700: (12.140498225s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-943700 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-943700 status --format={{.Host}}: exit status 7 (114.641057ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
E1117 16:37:32.146721   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (48.165779065s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-943700 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2 : exit status 106 (466.210583ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-943700] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-943700
	    minikube start -p kubernetes-upgrade-943700 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9437002 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.3, by running:
	    
	    minikube start -p kubernetes-upgrade-943700 --kubernetes-version=v1.28.3
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-943700 --memory=2200 --kubernetes-version=v1.28.3 --alsologtostderr -v=1 --driver=kvm2 : (38.765774165s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-943700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-943700
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-943700: (1.68380751s)
--- PASS: TestKubernetesUpgrade (200.70s)
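The downgrade attempt above fails in under half a second (exit status 106) because it is rejected by a version check before any VM work happens: the requested version is compared against the version already recorded for the profile. A hedged sketch of such a guard using golang.org/x/mod/semver; minikube's actual check may be implemented differently:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/mod/semver"
	)

	// checkDowngrade refuses to move an existing cluster to an older release.
	func checkDowngrade(existing, requested string) error {
		if semver.Compare(requested, existing) < 0 {
			return fmt.Errorf(
				"unable to safely downgrade existing Kubernetes %s cluster to %s",
				existing, requested)
		}
		return nil
	}

	func main() {
		if err := checkDowngrade("v1.28.3", "v1.16.0"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to K8S_DOWNGRADE_UNSUPPORTED:", err)
			os.Exit(106) // the exit code observed above
		}
	}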

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.7s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (236.56s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2896662547.exe start -p stopped-upgrade-083805 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2896662547.exe start -p stopped-upgrade-083805 --memory=2200 --vm-driver=kvm2 : (1m51.693669383s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2896662547.exe -p stopped-upgrade-083805 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2896662547.exe -p stopped-upgrade-083805 stop: (12.355462412s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-083805 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-083805 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m52.508137005s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (236.56s)

                                                
                                    
TestPause/serial/Start (118.74s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-160002 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-160002 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m58.743941507s)
--- PASS: TestPause/serial/Start (118.74s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.38s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-083805
E1117 16:40:22.979293   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
version_upgrade_test.go:219: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-083805: (1.383893697s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801524 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-801524 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (78.939133ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-801524] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17634
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17634-9353/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17634-9353/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
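This subtest exercises pure argument validation: --no-kubernetes contradicts --kubernetes-version, so the binary exits with usage status 14 in about 80ms, before touching the kvm2 driver at all. A minimal sketch of that kind of mutual-exclusion check with the standard flag package (flag names mirror the CLI above; this is not minikube's own code):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Reject the contradictory combination up front, mirroring the
		// MK_USAGE failure (exit status 14) recorded above.
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr,
				"X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags ok")
	}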

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (59.57s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801524 --driver=kvm2 
E1117 16:40:25.540324   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801524 --driver=kvm2 : (59.273548701s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-801524 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (59.57s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (59.49s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-160002 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-160002 --alsologtostderr -v=1 --driver=kvm2 : (59.463946858s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (59.49s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (32.3s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801524 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801524 --no-kubernetes --driver=kvm2 : (30.980225261s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-801524 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-801524 status -o json: exit status 2 (273.122603ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-801524","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-801524
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-801524: (1.042130154s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (32.30s)
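The status -o json call exits 2 here because the host is running while the Kubernetes components are stopped, which is exactly the state --no-kubernetes should produce. Decoding that JSON in Go, with the struct shape inferred from the output above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Status mirrors the fields visible in the JSON above.
	type Status struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		raw := `{"Name":"NoKubernetes-801524","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
		var s Status
		if err := json.Unmarshal([]byte(raw), &s); err != nil {
			panic(err)
		}
		// A running host with Kubernetes components stopped is exactly
		// what this test expects after --no-kubernetes.
		fmt.Printf("%s: host=%s kubelet=%s\n", s.Name, s.Host, s.Kubelet)
	}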

                                                
                                    
TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-160002 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

                                                
                                    
TestPause/serial/VerifyStatus (0.3s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-160002 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-160002 --output=json --layout=cluster: exit status 2 (295.829718ms)

                                                
                                                
-- stdout --
	{"Name":"pause-160002","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-160002","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
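The --layout=cluster output encodes component state with HTTP-flavoured status codes: 200 for OK, 405 for Stopped, and 418 for Paused, which is why a fully paused cluster still yields exit status 2 rather than a hard error. A sketch of decoding the nested layout, with types inferred (partially) from the JSON above:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Types inferred from the --layout=cluster JSON above (other fields,
	// such as Step and BinaryVersion, are omitted here).
	type Component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type Node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]Component
	}

	type ClusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []Node
	}

	func main() {
		raw := `{"Name":"pause-160002","StatusCode":418,"StatusName":"Paused",
		  "Nodes":[{"Name":"pause-160002","StatusCode":200,"StatusName":"OK",
		  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
		  "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`
		var cs ClusterStatus
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			panic(err)
		}
		for name, c := range cs.Nodes[0].Components {
			fmt.Printf("%s: %d %s\n", name, c.StatusCode, c.StatusName)
		}
	}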

                                                
                                    
TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-160002 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
TestPause/serial/PauseAgain (0.69s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-160002 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

                                                
                                    
TestPause/serial/DeletePaused (1.06s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-160002 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-160002 --alsologtostderr -v=5: (1.059675292s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.68s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.68s)

                                                
                                    
TestNoKubernetes/serial/Start (78.75s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801524 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801524 --no-kubernetes --driver=kvm2 : (1m18.745724954s)
--- PASS: TestNoKubernetes/serial/Start (78.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-801524 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-801524 "sudo systemctl is-active --quiet service kubelet": exit status 1 (240.826223ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.24s)
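This check passes precisely because the command fails: systemctl is-active --quiet exits 0 only for an active unit and non-zero otherwise (3, as in the stderr above, is the standard systemd code for an inactive unit), and minikube ssh surfaces the remote failure as its own exit status 1. A small sketch of interpreting that exit code directly on a systemd host:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// unitActive reports whether a systemd unit is active. Exit status 0
	// means active; a non-zero status (3 in the log above) means it is not.
	func unitActive(unit string) (bool, error) {
		err := exec.Command("systemctl", "is-active", "--quiet", unit).Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return false, nil // unit is known but not active
		}
		return false, err // systemctl itself could not run
	}

	func main() {
		active, err := unitActive("kubelet")
		fmt.Println("kubelet active:", active, err)
	}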

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.12s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.2s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-801524
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-801524: (2.196805034s)
--- PASS: TestNoKubernetes/serial/Stop (2.20s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (47.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-801524 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-801524 --driver=kvm2 : (47.217283557s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (47.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (164.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-055844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
E1117 16:43:35.528784   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:35.534155   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:35.544444   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:35.564738   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:35.605091   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:35.685456   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:35.845932   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:36.166532   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:36.807485   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:38.088553   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:40.648751   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:45.769453   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:43:56.010270   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-055844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (2m44.875603344s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.88s)
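The repeated E1117 cert_rotation.go:168 lines interleaved with this test are background noise rather than failures: client-go periodically reloads client certificates that were configured as files on disk, and the gvisor-961990 profile referenced here was deleted earlier in the run, so every reload attempt fails with "no such file or directory". A sketch of the kind of file-based client configuration that opts into this reloading (host and paths here are hypothetical):

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Referencing certificate *files* (rather than embedding the bytes)
		// makes client-go re-read them periodically; deleting the files
		// mid-run produces errors like the cert_rotation lines above.
		cfg := &rest.Config{
			Host: "https://192.168.39.10:8443", // hypothetical endpoint
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/tmp/minikube/profiles/example/client.crt",
				KeyFile:  "/tmp/minikube/profiles/example/client.key",
				CAFile:   "/tmp/minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		fmt.Println(clientset != nil, err)
	}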

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-801524 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-801524 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.208979ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (144.49s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-614434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-614434 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.28.3: (2m24.484857872s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (144.49s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (119.5s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-993837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
E1117 16:44:16.490992   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:44:57.451200   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-993837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (1m59.502622101s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (119.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-215145 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-215145 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (1m14.856034256s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.86s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-055844 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2e27750a-3b37-497d-b33c-f82ae892734d] Pending
helpers_test.go:344: "busybox" [2e27750a-3b37-497d-b33c-f82ae892734d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2e27750a-3b37-497d-b33c-f82ae892734d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.03231705s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-055844 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.54s)
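DeployApp applies testdata/busybox.yaml and then, as the helpers_test.go lines show, polls pods matching the integration-test=busybox label until one reports Running, with an 8m0s budget. A hedged sketch of that polling loop with client-go; the kubeconfig path is a placeholder and the suite's actual helper may differ:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // hypothetical path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(8 * time.Minute) // the test waits 8m0s
		for time.Now().Before(deadline) {
			pods, err := client.CoreV1().Pods("default").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "integration-test=busybox"})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Printf("%q healthy\n", p.Name)
						return
					}
				}
			}
			time.Sleep(5 * time.Second)
		}
		panic("timed out waiting for integration-test=busybox")
	}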

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-993837 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [714f8bde-a17a-45ed-aeb3-1659cfb71218] Pending
helpers_test.go:344: "busybox" [714f8bde-a17a-45ed-aeb3-1659cfb71218] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [714f8bde-a17a-45ed-aeb3-1659cfb71218] Running
E1117 16:46:19.224254   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:46:19.372033   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.03781373s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-993837 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-055844 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-055844 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-055844 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-055844 --alsologtostderr -v=3: (13.369655977s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.03s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-993837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-993837 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (3.93748862s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-993837 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (4.03s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-993837 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-993837 --alsologtostderr -v=3: (13.151308284s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-055844 -n old-k8s-version-055844
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-055844 -n old-k8s-version-055844: exit status 7 (97.860159ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-055844 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (443.81s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-055844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-055844 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.16.0: (7m23.411735653s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-055844 -n old-k8s-version-055844
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (443.81s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.58s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-614434 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [db36e74a-f383-467b-b488-90aef109e6c3] Pending
helpers_test.go:344: "busybox" [db36e74a-f383-467b-b488-90aef109e6c3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [db36e74a-f383-467b-b488-90aef109e6c3] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.045269749s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-614434 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.58s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-993837 -n embed-certs-993837
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-993837 -n embed-certs-993837: exit status 7 (95.382056ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-993837 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (342.67s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-993837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-993837 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.28.3: (5m42.163285728s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-993837 -n embed-certs-993837
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (342.67s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-614434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-614434 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.221451409s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-614434 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.14s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-614434 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-614434 --alsologtostderr -v=3: (13.134894717s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-614434 -n no-preload-614434: exit status 7 (118.585357ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-614434 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-215145 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0d2f5fb9-d5cb-42b8-a0b1-7b69676ffcdd] Pending
helpers_test.go:344: "busybox" [0d2f5fb9-d5cb-42b8-a0b1-7b69676ffcdd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0d2f5fb9-d5cb-42b8-a0b1-7b69676ffcdd] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.025209067s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-215145 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-215145 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-215145 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.127401932s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-215145 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-215145 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-215145 --alsologtostderr -v=3: (13.132776028s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145: exit status 7 (103.892257ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-215145 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-215145 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3
E1117 16:47:32.146607   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-215145 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.28.3: (5m9.738349179s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (310.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (82.46s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-570037 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1117 16:48:35.527372   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
E1117 16:49:03.213117   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/gvisor-961990/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-570037 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (1m22.458662033s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (82.46s)
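For reference, the newest-cni start line packs several behaviours into its flags: --wait narrows readiness checks to the apiserver, system pods, and the default service account (pods cannot schedule until a CNI is installed), --network-plugin=cni with the kubeadm extra-config sets the pod CIDR, and the feature gate is passed through to the Kubernetes components. A sketch of driving that same invocation from Go, with the flags copied from the run above (an illustration of the invocation pattern, not the test's own code):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "newest-cni-570037",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
			"--driver=kvm2", "--kubernetes-version=v1.28.3")
		// Stream output so failures are visible, as the test harness does.
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			os.Exit(1)
		}
	}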

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-570037 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-570037 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.117588304s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (13.13s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-570037 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-570037 --alsologtostderr -v=3: (13.12722835s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.13s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-570037 -n newest-cni-570037
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-570037 -n newest-cni-570037: exit status 7 (85.665677ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-570037 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (45.73s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-570037 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3
E1117 16:49:22.268116   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-570037 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.28.3: (45.41622172s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-570037 -n newest-cni-570037
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (45.73s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-570037 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (2.46s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-570037 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-570037 -n newest-cni-570037
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-570037 -n newest-cni-570037: exit status 2 (281.557396ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-570037 -n newest-cni-570037
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-570037 -n newest-cni-570037: exit status 2 (285.608253ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-570037 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-570037 -n newest-cni-570037
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-570037 -n newest-cni-570037
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.46s)
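
The Pause step above can be reproduced by hand with the same commands the test runs; a minimal sketch, assuming the newest-cni-570037 profile still exists (minikube status intentionally exits non-zero while components are paused):

	# Pause, then confirm the apiserver reports Paused and the kubelet Stopped (each exits status 2).
	minikube pause -p newest-cni-570037 --alsologtostderr -v=1
	minikube status --format='{{.APIServer}}' -p newest-cni-570037
	minikube status --format='{{.Kubelet}}' -p newest-cni-570037
	# Unpause and re-check; both status calls should now succeed.
	minikube unpause -p newest-cni-570037 --alsologtostderr -v=1
	minikube status --format='{{.APIServer}}' -p newest-cni-570037
	minikube status --format='{{.Kubelet}}' -p newest-cni-570037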

TestNetworkPlugins/group/auto/Start (69.17s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E1117 16:50:20.420984   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
E1117 16:50:47.395809   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m9.166324186s)
--- PASS: TestNetworkPlugins/group/auto/Start (69.17s)

TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

TestNetworkPlugins/group/auto/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7zf57" [2c942d1a-591f-4572-822e-6e3cde1c2880] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1117 16:51:19.223389   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-7zf57" [2c942d1a-591f-4572-822e-6e3cde1c2880] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.011084389s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.38s)
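
Each NetCatPod step re-creates the suite's netcat deployment and waits for its pod to become Ready. A rough kubectl equivalent, assuming the repo's testdata/netcat-deployment.yaml fixture and the app=netcat label it applies:

	# Re-create the workload from the fixture, then block until its pod is Ready (the test allows up to 15m).
	kubectl --context auto-081012 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-081012 wait --for=condition=ready pod -l app=netcat --timeout=15m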

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
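
The Localhost and HairPin probes differ only in the target: the first netcats the pod's own localhost:8080, while the second dials back in through the pod's own Service name, which succeeds only when hairpin traffic is handled. Both checks, exactly as the test runs them:

	# Same-pod loopback.
	kubectl --context auto-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# Hairpin: reach the pod through its own Service.
	kubectl --context auto-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"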

TestNetworkPlugins/group/kindnet/Start (82.92s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
E1117 16:51:53.030547   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/client.crt: no such file or directory
E1117 16:52:13.511675   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m22.920562103s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.92s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.03s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmw5j" [da4e6c74-a38a-4247-a6e7-b8610109f518] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1117 16:52:32.145805   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmw5j" [da4e6c74-a38a-4247-a6e7-b8610109f518] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.027045653s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (16.03s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-jmw5j" [da4e6c74-a38a-4247-a6e7-b8610109f518] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018547212s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-993837 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)
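
UserAppExistsAfterStop and AddonExistsAfterStop both verify that the dashboard survived the restart by waiting on the k8s-app=kubernetes-dashboard label. A sketch of the same wait in plain kubectl, assuming the embed-certs-993837 context:

	kubectl --context embed-certs-993837 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
	kubectl --context embed-certs-993837 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper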

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v8trw" [b600bb3a-f2eb-42df-93b0-487b494823f7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.025014866s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-993837 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
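
VerifyKubernetesImages lists the images present inside the VM over SSH and flags anything outside the expected minikube set. The same data can be inspected by hand (piping through jq is an assumption; any JSON filter works):

	minikube ssh -p embed-certs-993837 "sudo crictl images -o json"
	# Extract just the repo tags.
	minikube ssh -p embed-certs-993837 "sudo crictl images -o json" | jq -r '.images[].repoTags[]'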

TestStartStop/group/embed-certs/serial/Pause (3.1s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-993837 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-993837 -n embed-certs-993837
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-993837 -n embed-certs-993837: exit status 2 (330.663916ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-993837 -n embed-certs-993837
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-993837 -n embed-certs-993837: exit status 2 (277.605412ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-993837 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-993837 -n embed-certs-993837
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-993837 -n embed-certs-993837
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.10s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-8694d4445c-v8trw" [b600bb3a-f2eb-42df-93b0-487b494823f7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01613703s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-215145 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestNetworkPlugins/group/calico/Start (104.66s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m44.663220755s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.66s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-215145 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-215145 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145: exit status 2 (461.939045ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145: exit status 2 (395.97043ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-215145 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145
E1117 16:52:54.473132   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-215145 -n default-k8s-diff-port-215145
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.60s)

TestNetworkPlugins/group/custom-flannel/Start (108.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m48.193964006s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (108.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-fcvvp" [89b2ccd2-8640-4b47-95ef-b58e7d70e6f4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.019488636s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8tgf5" [4d1718ec-cae1-4ff2-96fa-a641ad044a68] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8tgf5" [4d1718ec-cae1-4ff2-96fa-a641ad044a68] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.070770472s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.42s)

TestNetworkPlugins/group/kindnet/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

TestNetworkPlugins/group/kindnet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/false/Start (86.55s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m26.54984232s)
--- PASS: TestNetworkPlugins/group/false/Start (86.55s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-x8jwk" [6c749da1-3cef-4e72-8236-d8834a7c30db] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.022918498s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-x8jwk" [6c749da1-3cef-4e72-8236-d8834a7c30db] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016197292s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-055844 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard

TestStartStop/group/old-k8s-version/serial/Pause (2.86s)

                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-055844 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-055844 -n old-k8s-version-055844
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-055844 -n old-k8s-version-055844: exit status 2 (292.581868ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-055844 -n old-k8s-version-055844
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-055844 -n old-k8s-version-055844: exit status 2 (304.203082ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-055844 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-055844 -n old-k8s-version-055844
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-055844 -n old-k8s-version-055844
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.86s)
E1117 16:56:05.586717   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:05.627075   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:05.707458   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:05.868371   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:06.189478   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:06.830087   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:08.110772   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:10.671155   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:15.792164   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:18.021092   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.026379   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.036709   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.056961   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.097322   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.177772   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.338292   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:18.658470   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:19.223732   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/ingress-addon-legacy-533209/client.crt: no such file or directory
E1117 16:56:19.299329   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:20.579503   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:23.139951   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:26.032525   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
E1117 16:56:28.260170   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
E1117 16:56:32.550036   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/client.crt: no such file or directory
E1117 16:56:38.500338   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory

TestNetworkPlugins/group/enable-default-cni/Start (85.75s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
E1117 16:54:16.393407   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m25.749767034s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.75s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-mv2hw" [0daf6f8e-2a1d-476c-ab35-824bcc29cb9f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0314397s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)
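
ControllerPod waits for the CNI's daemon pod to report Running, here calico-node in kube-system. The equivalent wait in plain kubectl, using the label shown in the output above:

	kubectl --context calico-081012 -n kube-system wait --for=condition=ready pod -l k8s-app=calico-node --timeout=10m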

TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

TestNetworkPlugins/group/calico/NetCatPod (13.49s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-m2pjt" [1c8721d0-6f09-4ce9-897c-3ce90ab923b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-m2pjt" [1c8721d0-6f09-4ce9-897c-3ce90ab923b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.021495629s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.49s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (13.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-k2tkx" [7dd46ebb-a373-41bd-b37e-945419fa03e1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-k2tkx" [7dd46ebb-a373-41bd-b37e-945419fa03e1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.013549306s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.43s)

TestNetworkPlugins/group/calico/DNS (0.4s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.40s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

TestNetworkPlugins/group/false/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.28s)

TestNetworkPlugins/group/false/NetCatPod (13.51s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-s692r" [9ba14eed-3952-4688-a650-9ce70dc5da0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-s692r" [9ba14eed-3952-4688-a650-9ce70dc5da0d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.01740879s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.51s)

TestNetworkPlugins/group/flannel/Start (85.63s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m25.632195681s)
--- PASS: TestNetworkPlugins/group/flannel/Start (85.63s)

TestNetworkPlugins/group/bridge/Start (97.88s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
E1117 16:55:20.420206   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m37.877054954s)
--- PASS: TestNetworkPlugins/group/bridge/Start (97.88s)

TestNetworkPlugins/group/false/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.21s)

TestNetworkPlugins/group/false/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.15s)

TestNetworkPlugins/group/false/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-mx5b5" [551de3d5-9e86-4fa6-a579-48efccc79e3f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-mx5b5" [551de3d5-9e86-4fa6-a579-48efccc79e3f] Running
E1117 16:55:47.395919   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/addons-051402/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.010792483s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.36s)

TestNetworkPlugins/group/kubenet/Start (101.82s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-081012 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m41.824519054s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (101.82s)
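
Taken together, the Start steps in this run cover one networking mode per profile. The flag variants below are taken verbatim from the commands above; the shared flags (--memory=3072 --alsologtostderr --wait=true --wait-timeout=15m) are omitted for brevity:

	minikube start -p auto-081012 --driver=kvm2                    # plugin chosen automatically
	minikube start -p kindnet-081012 --driver=kvm2 --cni=kindnet
	minikube start -p calico-081012 --driver=kvm2 --cni=calico
	minikube start -p custom-flannel-081012 --driver=kvm2 --cni=testdata/kube-flannel.yaml
	minikube start -p false-081012 --driver=kvm2 --cni=false
	minikube start -p enable-default-cni-081012 --driver=kvm2 --enable-default-cni=true
	minikube start -p flannel-081012 --driver=kvm2 --cni=flannel
	minikube start -p bridge-081012 --driver=kvm2 --cni=bridge
	minikube start -p kubenet-081012 --driver=kvm2 --network-plugin=kubenet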

TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vnrhk" [0d0d2f30-002b-4f7b-a71d-c6cbd0dd5128] Running
E1117 16:56:43.464854   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/skaffold-936532/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.020359813s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.5s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.50s)

TestNetworkPlugins/group/flannel/NetCatPod (14.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-081012 replace --force -f testdata/netcat-deployment.yaml
E1117 16:56:46.512975   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/old-k8s-version-055844/client.crt: no such file or directory
net_test.go:149: (dbg) Done: kubectl --context flannel-081012 replace --force -f testdata/netcat-deployment.yaml: (1.262875101s)
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-67ms5" [932bcfee-6ad5-47dc-bfe9-93c32c57d921] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-67ms5" [932bcfee-6ad5-47dc-bfe9-93c32c57d921] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.021532973s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.34s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.24s)


TestNetworkPlugins/group/bridge/NetCatPod (12.42s)

net_test.go:149: (dbg) Run:  kubectl --context bridge-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-2nwz6" [eddc8bbe-0257-41b5-8823-db6b92bd20b5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1117 16:56:58.981437   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/auto-081012/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-2nwz6" [eddc8bbe-0257-41b5-8823-db6b92bd20b5] Running
E1117 16:57:07.209368   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.214676   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.224906   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.245159   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.285503   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.365922   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.526405   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:07.846814   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:08.487901   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
E1117 16:57:09.768404   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/default-k8s-diff-port-215145/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.014083894s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.42s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E1117 16:57:00.233782   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/no-preload-614434/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)
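Localhost and HairPin differ only in the dial target: "nc -w 5 -i 5 -z localhost 8080" verifies the pod can reach its own port directly, while "nc -w 5 -i 5 -z netcat 8080" forces the connection out through the netcat Service and back to the same pod, which succeeds only when the CNI supports hairpin traffic. A rough Go equivalent of the hairpin probe, to be run from inside the pod (illustrative sketch, not the test code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the pod's own Service name so the connection hairpins
	// through the service VIP back to this pod; 5s mirrors nc -w 5.
	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
	if err != nil {
		fmt.Println("hairpin probe failed:", err)
		return
	}
	conn.Close()
	fmt.Println("hairpin probe ok")
}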

TestNetworkPlugins/group/bridge/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-081012 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.24s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.36s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-081012 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-htxwz" [03a0995b-94f5-4c6c-a4b8-ad40afae485d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-htxwz" [03a0995b-94f5-4c6c-a4b8-ad40afae485d] Running
E1117 16:57:32.145737   16558 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/functional-074045/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.011168404s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.36s)
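The "waiting 15m0s for pods matching app=netcat" step above is a poll-until-Running loop. A simplified sketch of that wait, shelling out to kubectl rather than using client-go as the real test helpers do, and assuming a single netcat replica:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(15 * time.Minute)
	for time.Now().Before(deadline) {
		// With one replica the jsonpath output is a single phase.
		out, _ := exec.Command("kubectl", "--context", "kubenet-081012",
			"get", "pods", "-l", "app=netcat",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if strings.TrimSpace(string(out)) == "Running" {
			fmt.Println("app=netcat healthy")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}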

TestNetworkPlugins/group/kubenet/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-081012 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.18s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-081012 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

Test skip (31/321)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.3/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.3/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.3/cached-images (0.00s)

TestDownloadOnly/v1.28.3/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.3/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.3/binaries (0.00s)

TestDownloadOnly/v1.28.3/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.3/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.3/kubectl (0.00s)

TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
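All eight TunnelCmd subtests above skip for the same reason: before doing anything, the suite checks whether route can be executed without a password prompt and skips on exit status 1. A hedged sketch of such a gate; the actual check in functional_test_tunnel_test.go may use a different command or flags:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// sudo -n fails instead of prompting when a password would be
	// required, which is the condition the tunnel tests skip on.
	if err := exec.Command("sudo", "-n", "route", "-n").Run(); err != nil {
		fmt.Println("password required to execute 'route', skipping:", err)
		return
	}
	fmt.Println("route runs without a password; tunnel tests could proceed")
}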

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-623158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-623158
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/cilium (4.31s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-081012 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-081012

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-081012

>>> host: /etc/nsswitch.conf:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/hosts:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/resolv.conf:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-081012

>>> host: crictl pods:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: crictl containers:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> k8s: describe netcat deployment:
error: context "cilium-081012" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-081012" does not exist

>>> k8s: netcat logs:
error: context "cilium-081012" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-081012" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-081012" does not exist

>>> k8s: coredns logs:
error: context "cilium-081012" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-081012" does not exist

>>> k8s: api server logs:
error: context "cilium-081012" does not exist

>>> host: /etc/cni:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: ip a s:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: ip r s:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: iptables-save:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: iptables table nat:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-081012

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-081012

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-081012" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-081012" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-081012

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-081012

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-081012" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-081012" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-081012" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-081012" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-081012" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: kubelet daemon config:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> k8s: kubelet logs:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17634-9353/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Nov 2023 16:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.50:8443
  name: pause-160002
contexts:
- context:
    cluster: pause-160002
    extensions:
    - extension:
        last-update: Fri, 17 Nov 2023 16:40:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-160002
  name: pause-160002
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-160002
  user:
    client-certificate: /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/pause-160002/client.crt
    client-key: /home/jenkins/minikube-integration/17634-9353/.minikube/profiles/pause-160002/client.key
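Note: current-context is empty and the only remaining entry is pause-160002, which is consistent with every probe above failing with "context was not found for specified context: cilium-081012": the cilium profile was never started, so no such context was ever written to this kubeconfig. With a live profile, the collector would select the context explicitly, e.g.:

kubectl --context cilium-081012 get pods -A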
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-081012

>>> host: docker daemon status:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: docker daemon config:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: docker system info:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: cri-docker daemon status:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: cri-docker daemon config:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: cri-dockerd version:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: containerd daemon status:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: containerd daemon config:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: containerd config dump:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: crio daemon status:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: crio daemon config:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: /etc/crio:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

>>> host: crio config:
* Profile "cilium-081012" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-081012"

----------------------- debugLogs end: cilium-081012 [took: 4.10806799s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-081012" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-081012
--- SKIP: TestNetworkPlugins/group/cilium (4.31s)