Test Report: KVM_Linux 18998

e8d3a518ce9b98b9e9fc9f8b62f75f3019a13e07:2024-07-04:35167

Failed tests (1/341)

| Order | Failed test                           | Duration |
|-------|---------------------------------------|----------|
| 82    | TestFunctional/serial/ComponentHealth | 1.58s    |
TestFunctional/serial/ComponentHealth (1.58s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-377836 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.219 PodIP:192.168.39.219 StartTime:2024-07-03 22:57:48 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc001f0f068 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.30.2 ImageID:docker-pullable://registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d ContainerID:docker://0790dd5ddc5ea977a68ed1752c2402bd2edd431104d0d2889326b8b61e057862}]}
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
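Why it failed: the health check in functional_test.go lists the control-plane pods and requires each pod's Ready condition to be True; kube-apiserver was Running but reported Ready=False and ContainersReady=False, so the test failed. A minimal client-go sketch of this kind of readiness check follows (illustrative only — the setup and names are assumptions, not the test's actual code):

// readiness_sketch.go — approximates the assertion the test makes; not
// minikube's actual implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (the test pins --context functional-377836;
	// context selection is omitted here for brevity).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Same selector the test runs: control-plane pods in kube-system.
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "tier=control-plane"})
	if err != nil {
		panic(err)
	}

	for _, pod := range pods.Items {
		ready := false
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		if !ready {
			// This is the state kube-apiserver was caught in above.
			fmt.Printf("%s is not Ready (phase: %s)\n", pod.Name, pod.Status.Phase)
		}
	}
}

Given that the container shows RestartCount:0 and State Running, the apiserver was most likely still settling after the restart with the new admission-plugin flag and had not yet passed its readiness probe when the 1.58s check ran.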
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-377836 -n functional-377836
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 logs -n 25
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-147129 --log_dir                                                  | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | /tmp/nospam-147129 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-147129 --log_dir                                                  | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | /tmp/nospam-147129 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-147129 --log_dir                                                  | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | /tmp/nospam-147129 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-147129 --log_dir                                                  | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | /tmp/nospam-147129 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-147129 --log_dir                                                  | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | /tmp/nospam-147129 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-147129 --log_dir                                                  | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	|         | /tmp/nospam-147129 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-147129                                                         | nospam-147129     | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:53 UTC |
	| start   | -p functional-377836                                                     | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:53 UTC | 03 Jul 24 22:55 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	| start   | -p functional-377836                                                     | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-377836 cache add                                              | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-377836 cache add                                              | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-377836 cache add                                              | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-377836 cache add                                              | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | minikube-local-cache-test:functional-377836                              |                   |         |         |                     |                     |
	| cache   | functional-377836 cache delete                                           | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | minikube-local-cache-test:functional-377836                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	| ssh     | functional-377836 ssh sudo                                               | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-377836                                                        | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	|         | ssh sudo docker rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-377836 ssh                                                    | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-377836 cache reload                                           | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:55 UTC | 03 Jul 24 22:55 UTC |
	| ssh     | functional-377836 ssh                                                    | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-377836 kubectl --                                             | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:56 UTC |
	|         | --context functional-377836                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-377836                                                     | functional-377836 | jenkins | v1.33.1 | 03 Jul 24 22:56 UTC | 03 Jul 24 22:57 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:56:00
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:56:00.510702   22400 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:56:00.510928   22400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:56:00.510931   22400 out.go:304] Setting ErrFile to fd 2...
	I0703 22:56:00.510934   22400 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:56:00.511089   22400 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 22:56:00.511579   22400 out.go:298] Setting JSON to false
	I0703 22:56:00.512393   22400 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2305,"bootTime":1720045055,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:56:00.512467   22400 start.go:139] virtualization: kvm guest
	I0703 22:56:00.514487   22400 out.go:177] * [functional-377836] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:56:00.515747   22400 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 22:56:00.515754   22400 notify.go:220] Checking for updates...
	I0703 22:56:00.518152   22400 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:56:00.519330   22400 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	I0703 22:56:00.520495   22400 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	I0703 22:56:00.521611   22400 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 22:56:00.522783   22400 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 22:56:00.524220   22400 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 22:56:00.524282   22400 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:56:00.524703   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:00.524750   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:00.539191   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0703 22:56:00.539530   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:00.540031   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:00.540044   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:00.540405   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:00.540561   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:00.570317   22400 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 22:56:00.571392   22400 start.go:297] selected driver: kvm2
	I0703 22:56:00.571398   22400 start.go:901] validating driver "kvm2" against &{Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:56:00.571491   22400 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 22:56:00.571790   22400 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:56:00.571837   22400 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 22:56:00.585798   22400 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 22:56:00.586484   22400 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 22:56:00.586534   22400 cni.go:84] Creating CNI manager for ""
	I0703 22:56:00.586545   22400 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0703 22:56:00.586593   22400 start.go:340] cluster config:
	{Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:56:00.586682   22400 iso.go:125] acquiring lock: {Name:mke39b31a4a84d7efedf67d51c801ff7cd79d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:56:00.588567   22400 out.go:177] * Starting "functional-377836" primary control-plane node in "functional-377836" cluster
	I0703 22:56:00.589544   22400 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 22:56:00.589568   22400 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
	I0703 22:56:00.589573   22400 cache.go:56] Caching tarball of preloaded images
	I0703 22:56:00.589645   22400 preload.go:173] Found /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0703 22:56:00.589650   22400 cache.go:59] Finished verifying existence of preloaded tar for v1.30.2 on docker
	I0703 22:56:00.589724   22400 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/config.json ...
	I0703 22:56:00.589898   22400 start.go:360] acquireMachinesLock for functional-377836: {Name:mk0c7b3619f676bfb46d9cc345dd57d32a1f7d69 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0703 22:56:00.589935   22400 start.go:364] duration metric: took 27.079µs to acquireMachinesLock for "functional-377836"
	I0703 22:56:00.589944   22400 start.go:96] Skipping create...Using existing machine configuration
	I0703 22:56:00.589951   22400 fix.go:54] fixHost starting: 
	I0703 22:56:00.590201   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:00.590231   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:00.603145   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46073
	I0703 22:56:00.603508   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:00.603939   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:00.603953   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:00.604209   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:00.604345   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:00.604472   22400 main.go:141] libmachine: (functional-377836) Calling .GetState
	I0703 22:56:00.605840   22400 fix.go:112] recreateIfNeeded on functional-377836: state=Running err=<nil>
	W0703 22:56:00.605853   22400 fix.go:138] unexpected machine state, will restart: <nil>
	I0703 22:56:00.607164   22400 out.go:177] * Updating the running kvm2 "functional-377836" VM ...
	I0703 22:56:00.608157   22400 machine.go:94] provisionDockerMachine start ...
	I0703 22:56:00.608166   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:00.608319   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:00.610313   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.610589   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:00.610610   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.610791   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:00.610920   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:00.611041   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:00.611149   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:00.611250   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:00.611406   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:00.611411   22400 main.go:141] libmachine: About to run SSH command:
	hostname
	I0703 22:56:00.717240   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-377836
	
	I0703 22:56:00.717255   22400 main.go:141] libmachine: (functional-377836) Calling .GetMachineName
	I0703 22:56:00.717481   22400 buildroot.go:166] provisioning hostname "functional-377836"
	I0703 22:56:00.717496   22400 main.go:141] libmachine: (functional-377836) Calling .GetMachineName
	I0703 22:56:00.717647   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:00.720132   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.720444   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:00.720462   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.720574   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:00.720745   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:00.720859   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:00.720987   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:00.721117   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:00.721291   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:00.721300   22400 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-377836 && echo "functional-377836" | sudo tee /etc/hostname
	I0703 22:56:00.840292   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-377836
	
	I0703 22:56:00.840330   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:00.842697   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.843009   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:00.843023   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.843185   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:00.843343   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:00.843459   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:00.843617   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:00.843724   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:00.843870   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:00.843880   22400 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-377836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-377836/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-377836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0703 22:56:00.949561   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 22:56:00.949576   22400 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18998-9391/.minikube CaCertPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18998-9391/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18998-9391/.minikube}
	I0703 22:56:00.949600   22400 buildroot.go:174] setting up certificates
	I0703 22:56:00.949607   22400 provision.go:84] configureAuth start
	I0703 22:56:00.949614   22400 main.go:141] libmachine: (functional-377836) Calling .GetMachineName
	I0703 22:56:00.949829   22400 main.go:141] libmachine: (functional-377836) Calling .GetIP
	I0703 22:56:00.952036   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.952422   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:00.952458   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.952488   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:00.954553   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.954814   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:00.954838   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:00.954966   22400 provision.go:143] copyHostCerts
	I0703 22:56:00.955013   22400 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9391/.minikube/ca.pem, removing ...
	I0703 22:56:00.955019   22400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9391/.minikube/ca.pem
	I0703 22:56:00.955091   22400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18998-9391/.minikube/ca.pem (1082 bytes)
	I0703 22:56:00.955191   22400 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9391/.minikube/cert.pem, removing ...
	I0703 22:56:00.955196   22400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9391/.minikube/cert.pem
	I0703 22:56:00.955232   22400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18998-9391/.minikube/cert.pem (1123 bytes)
	I0703 22:56:00.955295   22400 exec_runner.go:144] found /home/jenkins/minikube-integration/18998-9391/.minikube/key.pem, removing ...
	I0703 22:56:00.955300   22400 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18998-9391/.minikube/key.pem
	I0703 22:56:00.955325   22400 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18998-9391/.minikube/key.pem (1675 bytes)
	I0703 22:56:00.955380   22400 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18998-9391/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca-key.pem org=jenkins.functional-377836 san=[127.0.0.1 192.168.39.219 functional-377836 localhost minikube]
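As an aside, provision.go issues the server certificate from minikube's local CA with the SAN list shown in the line above (the VM IP, hostname, localhost, and minikube). A hedged Go sketch of generating a certificate with those SANs — self-signed here for brevity, whereas minikube actually signs with its ca-key.pem:

// cert_sketch.go — illustrative sketch of a SAN-bearing server cert like the
// one logged above; not minikube's actual code path.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.functional-377836"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SAN list from the provision.go:117 line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.219")},
		DNSNames:    []string{"functional-377836", "localhost", "minikube"},
	}

	// Self-signed: template doubles as parent. minikube would pass its CA
	// cert and CA private key here instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}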
	I0703 22:56:01.131586   22400 provision.go:177] copyRemoteCerts
	I0703 22:56:01.131631   22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0703 22:56:01.131655   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.134435   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.134767   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.134786   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.134948   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.135121   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.135284   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.135412   22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
	I0703 22:56:01.215216   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0703 22:56:01.240412   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0703 22:56:01.265081   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0703 22:56:01.288832   22400 provision.go:87] duration metric: took 339.215018ms to configureAuth
	I0703 22:56:01.288850   22400 buildroot.go:189] setting minikube options for container-runtime
	I0703 22:56:01.289059   22400 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 22:56:01.289075   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:01.289337   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.291471   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.291798   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.291827   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.291910   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.292093   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.292242   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.292387   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.292529   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:01.292665   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:01.292670   22400 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0703 22:56:01.398770   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0703 22:56:01.398780   22400 buildroot.go:70] root file system type: tmpfs
	I0703 22:56:01.398881   22400 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0703 22:56:01.398897   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.401565   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.401882   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.401916   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.402064   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.402196   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.402338   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.402405   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.402497   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:01.402677   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:01.402731   22400 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0703 22:56:01.535248   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0703 22:56:01.535278   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.537572   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.537901   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.537915   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.538056   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.538198   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.538343   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.538482   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.538602   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:01.538814   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:01.538829   22400 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0703 22:56:01.647612   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0703 22:56:01.647634   22400 machine.go:97] duration metric: took 1.039471957s to provisionDockerMachine
	I0703 22:56:01.647642   22400 start.go:293] postStartSetup for "functional-377836" (driver="kvm2")
	I0703 22:56:01.647649   22400 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0703 22:56:01.647661   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:01.647931   22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0703 22:56:01.647947   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.650716   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.651035   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.651056   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.651191   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.651382   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.651516   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.651648   22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
	I0703 22:56:01.738442   22400 ssh_runner.go:195] Run: cat /etc/os-release
	I0703 22:56:01.743220   22400 info.go:137] Remote host: Buildroot 2023.02.9
	I0703 22:56:01.743233   22400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9391/.minikube/addons for local assets ...
	I0703 22:56:01.743297   22400 filesync.go:126] Scanning /home/jenkins/minikube-integration/18998-9391/.minikube/files for local assets ...
	I0703 22:56:01.743357   22400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem -> 166762.pem in /etc/ssl/certs
	I0703 22:56:01.743417   22400 filesync.go:149] local asset: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/test/nested/copy/16676/hosts -> hosts in /etc/test/nested/copy/16676
	I0703 22:56:01.743445   22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/16676
	I0703 22:56:01.754934   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem --> /etc/ssl/certs/166762.pem (1708 bytes)
	I0703 22:56:01.783656   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/test/nested/copy/16676/hosts --> /etc/test/nested/copy/16676/hosts (40 bytes)
	I0703 22:56:01.813227   22400 start.go:296] duration metric: took 165.576258ms for postStartSetup
	I0703 22:56:01.813249   22400 fix.go:56] duration metric: took 1.223301149s for fixHost
	I0703 22:56:01.813264   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.816280   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.816637   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.816660   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.816808   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.816965   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.817113   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.817251   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.817388   22400 main.go:141] libmachine: Using SSH client type: native
	I0703 22:56:01.817534   22400 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d960] 0x8306c0 <nil>  [] 0s} 192.168.39.219 22 <nil> <nil>}
	I0703 22:56:01.817539   22400 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0703 22:56:01.921633   22400 main.go:141] libmachine: SSH cmd err, output: <nil>: 1720047361.898144190
	
	I0703 22:56:01.921649   22400 fix.go:216] guest clock: 1720047361.898144190
	I0703 22:56:01.921657   22400 fix.go:229] Guest: 2024-07-03 22:56:01.89814419 +0000 UTC Remote: 2024-07-03 22:56:01.813250822 +0000 UTC m=+1.336205740 (delta=84.893368ms)
	I0703 22:56:01.921693   22400 fix.go:200] guest clock delta is within tolerance: 84.893368ms
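For reference, fix.go compares the guest clock against the host and only forces a resync when the delta exceeds a tolerance; here the ~85ms delta passed. A minimal sketch of that comparison (the one-second tolerance is an assumption; the log does not state minikube's actual threshold):

// clockdelta_sketch.go — illustrative sketch of the guest-clock tolerance
// check logged above; the 1s tolerance is an assumption, not minikube's value.
package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the absolute guest/host clock skew is
// at most the given tolerance.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(84893368 * time.Nanosecond) // the 84.893368ms delta logged above
	if withinTolerance(guest, host, time.Second) {
		fmt.Println("guest clock delta is within tolerance") // mirrors fix.go:200
	} else {
		fmt.Println("would resync the guest clock (e.g. set it over SSH)")
	}
}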
	I0703 22:56:01.921699   22400 start.go:83] releasing machines lock for "functional-377836", held for 1.331758498s
	I0703 22:56:01.921725   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:01.921996   22400 main.go:141] libmachine: (functional-377836) Calling .GetIP
	I0703 22:56:01.924305   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.924629   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.924644   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.924760   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:01.925216   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:01.925391   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:01.925471   22400 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0703 22:56:01.925520   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.925578   22400 ssh_runner.go:195] Run: cat /version.json
	I0703 22:56:01.925593   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:01.927832   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.928115   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.928143   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.928218   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.928259   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.928426   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.928561   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.928614   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:01.928630   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:01.928673   22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
	I0703 22:56:01.928804   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:01.928949   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:01.929092   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:01.929247   22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
	I0703 22:56:02.025188   22400 ssh_runner.go:195] Run: systemctl --version
	I0703 22:56:02.031157   22400 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0703 22:56:02.037051   22400 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0703 22:56:02.037091   22400 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0703 22:56:02.046401   22400 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
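	
	The two steps above scan /etc/cni/net.d and rename any bridge or podman CNI configs out of the way (adding a .mk_disabled suffix) so they cannot conflict with the CNI minikube configures later. A minimal Go sketch of the same rename pass, assuming direct filesystem access instead of minikube's ssh_runner:
	
	package main
	
	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)
	
	func main() {
		for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, _ := filepath.Glob(pattern) // patterns are fixed literals, so no error is possible
			for _, m := range matches {
				if strings.HasSuffix(m, ".mk_disabled") {
					continue // already disabled on a previous run
				}
				fmt.Printf("disabling %s\n", m)
				if err := os.Rename(m, m+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
				}
			}
		}
	}
	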
	I0703 22:56:02.046415   22400 start.go:494] detecting cgroup driver to use...
	I0703 22:56:02.046513   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 22:56:02.065422   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0703 22:56:02.076869   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0703 22:56:02.087527   22400 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0703 22:56:02.087568   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0703 22:56:02.103875   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0703 22:56:02.113888   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0703 22:56:02.123993   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0703 22:56:02.134193   22400 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0703 22:56:02.145197   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0703 22:56:02.155637   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0703 22:56:02.166050   22400 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0703 22:56:02.176160   22400 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0703 22:56:02.185582   22400 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0703 22:56:02.195001   22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:56:02.377559   22400 ssh_runner.go:195] Run: sudo systemctl restart containerd
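	
	The run of sed edits above rewrites /etc/containerd/config.toml in place; the SystemdCgroup line is the one that actually pins containerd to the cgroupfs driver. A sketch of that single edit done in-process with Go's regexp package instead of sed (same pattern, same replacement):
	
	package main
	
	import (
		"os"
		"regexp"
	)
	
	func main() {
		const path = "/etc/containerd/config.toml"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Mirrors: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
		if err := os.WriteFile(path, out, 0644); err != nil {
			panic(err)
		}
	}
	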
	I0703 22:56:02.403790   22400 start.go:494] detecting cgroup driver to use...
	I0703 22:56:02.403849   22400 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0703 22:56:02.421212   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 22:56:02.436479   22400 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0703 22:56:02.459419   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0703 22:56:02.474687   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0703 22:56:02.487208   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0703 22:56:02.505209   22400 ssh_runner.go:195] Run: which cri-dockerd
	I0703 22:56:02.508898   22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0703 22:56:02.517695   22400 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0703 22:56:02.533976   22400 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0703 22:56:02.690896   22400 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0703 22:56:02.855888   22400 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0703 22:56:02.855988   22400 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0703 22:56:02.873313   22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:56:03.029389   22400 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0703 22:56:15.723723   22400 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.694306107s)
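	
	The "configuring docker to use cgroupfs" step just before the restart ships a small /etc/docker/daemon.json (the log records only its size, 130 bytes, not its contents). The sketch below writes a representative daemon.json, assuming the driver is pinned via exec-opts, which is the standard dockerd option for this; the actual file minikube generates may differ:
	
	package main
	
	import (
		"encoding/json"
		"os"
	)
	
	func main() {
		// Assumed contents; the real file is not shown in this log.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/docker/daemon.json", data, 0644); err != nil {
			panic(err)
		}
	}
	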
	I0703 22:56:15.723786   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0703 22:56:15.740703   22400 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0703 22:56:15.764390   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0703 22:56:15.777109   22400 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0703 22:56:15.894738   22400 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0703 22:56:16.026121   22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:56:16.159765   22400 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0703 22:56:16.176948   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0703 22:56:16.189646   22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:56:16.307121   22400 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0703 22:56:16.411260   22400 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0703 22:56:16.411322   22400 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
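	
	start.go gives the cri-dockerd socket up to 60s to appear before moving on to the crictl probe. A self-contained sketch of such a poll loop; the path and timeout come from the log, while the helper name waitForSocket is purely illustrative:
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForSocket polls until path exists or the deadline passes.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}
	
	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	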
	I0703 22:56:16.417973   22400 start.go:562] Will wait 60s for crictl version
	I0703 22:56:16.418002   22400 ssh_runner.go:195] Run: which crictl
	I0703 22:56:16.423655   22400 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0703 22:56:16.459234   22400 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0703 22:56:16.459290   22400 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0703 22:56:16.480430   22400 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0703 22:56:16.502918   22400 out.go:204] * Preparing Kubernetes v1.30.2 on Docker 27.0.3 ...
	I0703 22:56:16.502958   22400 main.go:141] libmachine: (functional-377836) Calling .GetIP
	I0703 22:56:16.505637   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:16.505995   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:16.506011   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:16.506178   22400 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0703 22:56:16.511629   22400 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0703 22:56:16.512955   22400 kubeadm.go:877] updating cluster {Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0703 22:56:16.513046   22400 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
	I0703 22:56:16.513085   22400 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0703 22:56:16.530936   22400 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-377836
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0703 22:56:16.530943   22400 docker.go:615] Images already preloaded, skipping extraction
	I0703 22:56:16.530975   22400 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0703 22:56:16.548145   22400 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-377836
	registry.k8s.io/kube-apiserver:v1.30.2
	registry.k8s.io/kube-controller-manager:v1.30.2
	registry.k8s.io/kube-scheduler:v1.30.2
	registry.k8s.io/kube-proxy:v1.30.2
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0703 22:56:16.548152   22400 cache_images.go:84] Images are preloaded, skipping loading
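	
	The "Images are preloaded" decision falls out of comparing the `docker images --format {{.Repository}}:{{.Tag}}` listing against the images the requested Kubernetes version needs. A sketch of that comparison, with the required list trimmed to a few entries from the listing above:
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		required := []string{ // subset of the preload manifest, for illustration
			"registry.k8s.io/kube-apiserver:v1.30.2",
			"registry.k8s.io/etcd:3.5.12-0",
			"registry.k8s.io/coredns/coredns:v1.11.1",
		}
		for _, img := range required {
			if !have[img] {
				fmt.Println("missing:", img)
			}
		}
	}
	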
	I0703 22:56:16.548158   22400 kubeadm.go:928] updating node { 192.168.39.219 8441 v1.30.2 docker true true} ...
	I0703 22:56:16.548246   22400 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-377836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.219
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0703 22:56:16.548284   22400 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0703 22:56:16.573639   22400 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0703 22:56:16.573694   22400 cni.go:84] Creating CNI manager for ""
	I0703 22:56:16.573707   22400 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0703 22:56:16.573714   22400 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0703 22:56:16.573730   22400 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.219 APIServerPort:8441 KubernetesVersion:v1.30.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-377836 NodeName:functional-377836 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.219"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.219 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0703 22:56:16.573853   22400 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.219
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-377836"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.219
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
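	
	The generated kubeadm config above is one YAML stream holding four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small Go sketch that walks such a stream and prints each document's apiVersion and kind, using gopkg.in/yaml.v3 (a parsing illustration, not minikube code):
	
	package main
	
	import (
		"errors"
		"fmt"
		"io"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()
	
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]any
			if err := dec.Decode(&doc); err != nil {
				if errors.Is(err, io.EOF) {
					break // end of the multi-document stream
				}
				panic(err)
			}
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}
	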
	I0703 22:56:16.573890   22400 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.2
	I0703 22:56:16.583317   22400 binaries.go:44] Found k8s binaries, skipping transfer
	I0703 22:56:16.583359   22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0703 22:56:16.592495   22400 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0703 22:56:16.608332   22400 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0703 22:56:16.624483   22400 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2015 bytes)
	I0703 22:56:16.639987   22400 ssh_runner.go:195] Run: grep 192.168.39.219	control-plane.minikube.internal$ /etc/hosts
	I0703 22:56:16.643649   22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:56:16.779958   22400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 22:56:16.836127   22400 certs.go:68] Setting up /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836 for IP: 192.168.39.219
	I0703 22:56:16.836138   22400 certs.go:194] generating shared ca certs ...
	I0703 22:56:16.836158   22400 certs.go:226] acquiring lock for ca certs: {Name:mkf6614f3bbac218620dd9f7f5d0832f57cc4a9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:56:16.836311   22400 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18998-9391/.minikube/ca.key
	I0703 22:56:16.836344   22400 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18998-9391/.minikube/proxy-client-ca.key
	I0703 22:56:16.836349   22400 certs.go:256] generating profile certs ...
	I0703 22:56:16.836445   22400 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.key
	I0703 22:56:16.836499   22400 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/apiserver.key.656cd1b8
	I0703 22:56:16.836545   22400 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/proxy-client.key
	I0703 22:56:16.836649   22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/16676.pem (1338 bytes)
	W0703 22:56:16.836670   22400 certs.go:480] ignoring /home/jenkins/minikube-integration/18998-9391/.minikube/certs/16676_empty.pem, impossibly tiny 0 bytes
	I0703 22:56:16.836676   22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca-key.pem (1679 bytes)
	I0703 22:56:16.836696   22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/ca.pem (1082 bytes)
	I0703 22:56:16.836712   22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/cert.pem (1123 bytes)
	I0703 22:56:16.836728   22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/certs/key.pem (1675 bytes)
	I0703 22:56:16.836757   22400 certs.go:484] found cert: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem (1708 bytes)
	I0703 22:56:16.837361   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0703 22:56:16.911461   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0703 22:56:16.983431   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0703 22:56:17.038715   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0703 22:56:17.082551   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0703 22:56:17.144530   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0703 22:56:17.189293   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0703 22:56:17.229606   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0703 22:56:17.263630   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/certs/16676.pem --> /usr/share/ca-certificates/16676.pem (1338 bytes)
	I0703 22:56:17.325075   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/ssl/certs/166762.pem --> /usr/share/ca-certificates/166762.pem (1708 bytes)
	I0703 22:56:17.365727   22400 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18998-9391/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0703 22:56:17.400421   22400 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0703 22:56:17.425695   22400 ssh_runner.go:195] Run: openssl version
	I0703 22:56:17.432255   22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16676.pem && ln -fs /usr/share/ca-certificates/16676.pem /etc/ssl/certs/16676.pem"
	I0703 22:56:17.446984   22400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16676.pem
	I0703 22:56:17.452267   22400 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul  3 22:53 /usr/share/ca-certificates/16676.pem
	I0703 22:56:17.452312   22400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16676.pem
	I0703 22:56:17.462306   22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16676.pem /etc/ssl/certs/51391683.0"
	I0703 22:56:17.485079   22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166762.pem && ln -fs /usr/share/ca-certificates/166762.pem /etc/ssl/certs/166762.pem"
	I0703 22:56:17.510278   22400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166762.pem
	I0703 22:56:17.519901   22400 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul  3 22:53 /usr/share/ca-certificates/166762.pem
	I0703 22:56:17.519934   22400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166762.pem
	I0703 22:56:17.525937   22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166762.pem /etc/ssl/certs/3ec20f2e.0"
	I0703 22:56:17.544227   22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0703 22:56:17.562056   22400 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:56:17.568131   22400 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul  3 22:47 /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:56:17.568157   22400 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0703 22:56:17.584913   22400 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0703 22:56:17.610103   22400 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0703 22:56:17.620833   22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0703 22:56:17.629374   22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0703 22:56:17.654991   22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0703 22:56:17.672985   22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0703 22:56:17.694093   22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0703 22:56:17.702460   22400 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
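	
	Each openssl run above is a `-checkend 86400` probe: it exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. The same test expressed with crypto/x509, using one certificate path from the log:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// checkend reports whether the PEM certificate at path expires within d.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM data found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}
	
	func main() {
		expiring, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", expiring)
	}
	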
	I0703 22:56:17.709716   22400 kubeadm.go:391] StartCluster: {Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:56:17.709856   22400 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0703 22:56:17.728429   22400 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0703 22:56:17.741501   22400 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0703 22:56:17.741512   22400 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0703 22:56:17.741517   22400 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0703 22:56:17.741561   22400 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0703 22:56:17.761221   22400 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0703 22:56:17.761720   22400 kubeconfig.go:125] found "functional-377836" server: "https://192.168.39.219:8441"
	I0703 22:56:17.762775   22400 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0703 22:56:17.776283   22400 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.219"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
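	
	Drift detection here is simply `diff -u` against the freshly rendered kubeadm.yaml.new: diff exits 0 when the files match, 1 when they differ (the unified diff is what gets logged), and greater than 1 on error. A Go sketch of that exit-code convention; configDrift is an illustrative helper, not minikube's API:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func configDrift(current, next string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", current, next).CombinedOutput()
		if err == nil {
			return false, "", nil // exit 0: identical
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // exit 1: files differ
		}
		return false, "", err // exit >1 or exec failure
	}
	
	func main() {
		drift, diff, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		if drift {
			fmt.Print("kubeadm config drift detected:\n" + diff)
		}
	}
	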
	I0703 22:56:17.776289   22400 kubeadm.go:1154] stopping kube-system containers ...
	I0703 22:56:17.776326   22400 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0703 22:56:17.839868   22400 docker.go:483] Stopping containers: [71c0b16f3679 b406ab73e5d2 d73ef8b96e2c f2953d08dacd fb0a93e6301f 757aac77b242 d19de58b969d 393afc286228 3720e138f218 d1d00023893a 1bf7856e6cc2 c307e26931c5 3180d83316a4 f8797e419a2e 5e029f1d16e3 81f88c33387f 35b39ff49ecf 59d9adb16464 f9cd97d184ab ede261f839ee 043bf1536424 f5f25bace2d8 28ad2448e774 d1dc51ed1398 c02af4c53647 40145ee83aa4 800da21bd3bc a169fa02b113 08541cc36205 a58b3f662ce2]
	I0703 22:56:17.839942   22400 ssh_runner.go:195] Run: docker stop 71c0b16f3679 b406ab73e5d2 d73ef8b96e2c f2953d08dacd fb0a93e6301f 757aac77b242 d19de58b969d 393afc286228 3720e138f218 d1d00023893a 1bf7856e6cc2 c307e26931c5 3180d83316a4 f8797e419a2e 5e029f1d16e3 81f88c33387f 35b39ff49ecf 59d9adb16464 f9cd97d184ab ede261f839ee 043bf1536424 f5f25bace2d8 28ad2448e774 d1dc51ed1398 c02af4c53647 40145ee83aa4 800da21bd3bc a169fa02b113 08541cc36205 a58b3f662ce2
	I0703 22:56:18.447345   22400 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0703 22:56:18.491875   22400 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0703 22:56:18.502739   22400 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 Jul  3 22:54 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5658 Jul  3 22:55 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Jul  3 22:54 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5606 Jul  3 22:55 /etc/kubernetes/scheduler.conf
	
	I0703 22:56:18.502786   22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0703 22:56:18.513009   22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0703 22:56:18.522810   22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0703 22:56:18.533077   22400 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0703 22:56:18.533105   22400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0703 22:56:18.546260   22400 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0703 22:56:18.555250   22400 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0703 22:56:18.555284   22400 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0703 22:56:18.566055   22400 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0703 22:56:18.579386   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 22:56:18.633477   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 22:56:19.554513   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0703 22:56:19.760922   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 22:56:19.850105   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0703 22:56:19.976563   22400 api_server.go:52] waiting for apiserver process to appear ...
	I0703 22:56:19.976641   22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:56:20.477503   22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:56:20.977473   22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:56:21.477069   22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:56:21.491782   22400 api_server.go:72] duration metric: took 1.515222902s to wait for apiserver process to appear ...
	I0703 22:56:21.491794   22400 api_server.go:88] waiting for apiserver healthz status ...
	I0703 22:56:21.491809   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:24.202043   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0703 22:56:24.202062   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0703 22:56:24.202073   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:24.223359   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0703 22:56:24.223376   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0703 22:56:24.492737   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:24.500880   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 22:56:24.500900   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 22:56:24.992493   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:25.001533   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 22:56:25.001546   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 22:56:25.492111   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:25.515234   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 22:56:25.515249   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0703 22:56:25.992563   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:25.997115   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 200:
	ok
	I0703 22:56:26.009758   22400 api_server.go:141] control plane version: v1.30.2
	I0703 22:56:26.009773   22400 api_server.go:131] duration metric: took 4.51797488s to wait for apiserver health ...
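	
	The healthz wait above keeps re-issuing GET https://192.168.39.219:8441/healthz, riding out the early 403s (the probe reaches the endpoint before RBAC bootstrap has granted it access) and 500s (poststarthooks such as rbac/bootstrap-roles still failing) until the endpoint returns 200 "ok". A standalone sketch of such a probe loop; it skips TLS verification and sends no credentials, so against an apiserver that rejects anonymous health checks it would keep seeing 403 where minikube's authenticated client eventually sees 200:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(60 * time.Second)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.39.219:8441/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println(string(body)) // "ok"
					return
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for apiserver health")
	}
	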
	I0703 22:56:26.009780   22400 cni.go:84] Creating CNI manager for ""
	I0703 22:56:26.009789   22400 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0703 22:56:26.011581   22400 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0703 22:56:26.012905   22400 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0703 22:56:26.029812   22400 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0703 22:56:26.053790   22400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 22:56:26.062249   22400 system_pods.go:59] 7 kube-system pods found
	I0703 22:56:26.062264   22400 system_pods.go:61] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0703 22:56:26.062273   22400 system_pods.go:61] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0703 22:56:26.062279   22400 system_pods.go:61] "kube-apiserver-functional-377836" [80bc54ed-3e0b-40c2-9e36-5889e4c30b1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0703 22:56:26.062284   22400 system_pods.go:61] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0703 22:56:26.062287   22400 system_pods.go:61] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:56:26.062290   22400 system_pods.go:61] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0703 22:56:26.062293   22400 system_pods.go:61] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:56:26.062297   22400 system_pods.go:74] duration metric: took 8.496972ms to wait for pod list to return data ...
	I0703 22:56:26.062302   22400 node_conditions.go:102] verifying NodePressure condition ...
	I0703 22:56:26.065106   22400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 22:56:26.065125   22400 node_conditions.go:123] node cpu capacity is 2
	I0703 22:56:26.065135   22400 node_conditions.go:105] duration metric: took 2.828996ms to run NodePressure ...
	I0703 22:56:26.065151   22400 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0703 22:56:26.362018   22400 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0703 22:56:26.373607   22400 kubeadm.go:733] kubelet initialised
	I0703 22:56:26.373615   22400 kubeadm.go:734] duration metric: took 11.58403ms waiting for restarted kubelet to initialise ...
	I0703 22:56:26.373621   22400 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 22:56:26.380076   22400 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:28.385303   22400 pod_ready.go:102] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"False"
	I0703 22:56:30.386552   22400 pod_ready.go:102] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"False"
	I0703 22:56:32.885744   22400 pod_ready.go:102] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"False"
	I0703 22:56:33.387117   22400 pod_ready.go:92] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:33.387126   22400 pod_ready.go:81] duration metric: took 7.007040826s for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:33.387133   22400 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:34.893301   22400 pod_ready.go:92] pod "etcd-functional-377836" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:34.893312   22400 pod_ready.go:81] duration metric: took 1.506172571s for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:34.893319   22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:36.899460   22400 pod_ready.go:102] pod "kube-apiserver-functional-377836" in "kube-system" namespace has status "Ready":"False"
	I0703 22:56:37.899596   22400 pod_ready.go:92] pod "kube-apiserver-functional-377836" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:37.899609   22400 pod_ready.go:81] duration metric: took 3.006283902s for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:37.899620   22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.406186   22400 pod_ready.go:92] pod "kube-controller-manager-functional-377836" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:38.406198   22400 pod_ready.go:81] duration metric: took 506.571414ms for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.406205   22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.410515   22400 pod_ready.go:92] pod "kube-proxy-pgfqk" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:38.410523   22400 pod_ready.go:81] duration metric: took 4.313563ms for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.410529   22400 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.414284   22400 pod_ready.go:92] pod "kube-scheduler-functional-377836" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:38.414291   22400 pod_ready.go:81] duration metric: took 3.757908ms for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.414298   22400 pod_ready.go:38] duration metric: took 12.04067037s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
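	
	Throughout the pod_ready.go lines above, a pod counts as "Ready" once its PodReady condition reports True, and each system-critical pod is polled until that holds or the 4m0s budget runs out. A client-go sketch of the same condition check; the namespace and coredns pod name are taken from this log, and the default kubeconfig location is assumed (an illustration, not minikube's implementation):
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// isPodReady returns true when the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-4w94w", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod readiness")
				return
			case <-time.After(2 * time.Second):
			}
		}
	}
	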
	I0703 22:56:38.414311   22400 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0703 22:56:38.426236   22400 ops.go:34] apiserver oom_adj: -16
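	
	ops.go confirms the apiserver got its OOM protection by reading /proc/<pid>/oom_adj (-16 here means the kernel strongly avoids killing it). A sketch that locates the process the same way the log does, with `pgrep -xnf`:
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Newest process whose full command line matches, as in the log's
		// `sudo pgrep -xnf kube-apiserver.*minikube.*`.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "no kube-apiserver process found:", err)
			os.Exit(1)
		}
		pid := strings.TrimSpace(string(out))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver oom_adj: %s", adj)
	}
	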
	I0703 22:56:38.426243   22400 kubeadm.go:591] duration metric: took 20.684721892s to restartPrimaryControlPlane
	I0703 22:56:38.426249   22400 kubeadm.go:393] duration metric: took 20.716542121s to StartCluster
	I0703 22:56:38.426267   22400 settings.go:142] acquiring lock: {Name:mka057d561020f5940ef3b848cb3bd46bcf2236f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:56:38.426329   22400 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18998-9391/kubeconfig
	I0703 22:56:38.427008   22400 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9391/kubeconfig: {Name:mk507e40fb0c0700be4af5efbc43c2602bfaff5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:56:38.427262   22400 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0703 22:56:38.427310   22400 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0703 22:56:38.427373   22400 addons.go:69] Setting storage-provisioner=true in profile "functional-377836"
	I0703 22:56:38.427400   22400 addons.go:234] Setting addon storage-provisioner=true in "functional-377836"
	W0703 22:56:38.427406   22400 addons.go:243] addon storage-provisioner should already be in state true
	I0703 22:56:38.427404   22400 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 22:56:38.427411   22400 addons.go:69] Setting default-storageclass=true in profile "functional-377836"
	I0703 22:56:38.427432   22400 host.go:66] Checking if "functional-377836" exists ...
	I0703 22:56:38.427442   22400 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-377836"
	I0703 22:56:38.427696   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:38.427716   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:38.427774   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:38.427805   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:38.429062   22400 out.go:177] * Verifying Kubernetes components...
	I0703 22:56:38.430289   22400 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0703 22:56:38.442081   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35731
	I0703 22:56:38.442456   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:38.442941   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:38.442957   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:38.443083   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43101
	I0703 22:56:38.443308   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:38.443412   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:38.443791   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:38.443823   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:38.443861   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:38.443875   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:38.444182   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:38.444354   22400 main.go:141] libmachine: (functional-377836) Calling .GetState
	I0703 22:56:38.446988   22400 addons.go:234] Setting addon default-storageclass=true in "functional-377836"
	W0703 22:56:38.446998   22400 addons.go:243] addon default-storageclass should already be in state true
	I0703 22:56:38.447023   22400 host.go:66] Checking if "functional-377836" exists ...
	I0703 22:56:38.447366   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:38.447403   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:38.458158   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36479
	I0703 22:56:38.458472   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:38.458933   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:38.458949   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:38.459232   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:38.459403   22400 main.go:141] libmachine: (functional-377836) Calling .GetState
	I0703 22:56:38.460775   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:38.462605   22400 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0703 22:56:38.463958   22400 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 22:56:38.463968   22400 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0703 22:56:38.463983   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:38.465217   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42569
	I0703 22:56:38.465639   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:38.466093   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:38.466110   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:38.466372   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:38.466613   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:38.466913   22400 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:56:38.466939   22400 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:56:38.466979   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:38.466999   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:38.467134   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:38.467287   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:38.467425   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:38.467558   22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
	I0703 22:56:38.481017   22400 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37535
	I0703 22:56:38.481392   22400 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:56:38.481794   22400 main.go:141] libmachine: Using API Version  1
	I0703 22:56:38.481801   22400 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:56:38.482100   22400 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:56:38.482261   22400 main.go:141] libmachine: (functional-377836) Calling .GetState
	I0703 22:56:38.483741   22400 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:56:38.483918   22400 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0703 22:56:38.483925   22400 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0703 22:56:38.483935   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
	I0703 22:56:38.486462   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:38.486889   22400 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
	I0703 22:56:38.486913   22400 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
	I0703 22:56:38.487015   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
	I0703 22:56:38.487168   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
	I0703 22:56:38.487292   22400 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
	I0703 22:56:38.487458   22400 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
	I0703 22:56:38.623636   22400 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0703 22:56:38.638008   22400 node_ready.go:35] waiting up to 6m0s for node "functional-377836" to be "Ready" ...
	I0703 22:56:38.640695   22400 node_ready.go:49] node "functional-377836" has status "Ready":"True"
	I0703 22:56:38.640707   22400 node_ready.go:38] duration metric: took 2.678119ms for node "functional-377836" to be "Ready" ...
	I0703 22:56:38.640716   22400 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 22:56:38.645601   22400 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.696961   22400 pod_ready.go:92] pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:38.696975   22400 pod_ready.go:81] duration metric: took 51.363862ms for pod "coredns-7db6d8ff4d-4w94w" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.697000   22400 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:38.787610   22400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0703 22:56:38.805215   22400 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0703 22:56:39.097469   22400 pod_ready.go:92] pod "etcd-functional-377836" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:39.097481   22400 pod_ready.go:81] duration metric: took 400.474207ms for pod "etcd-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:39.097489   22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:39.428454   22400 main.go:141] libmachine: Making call to close driver server
	I0703 22:56:39.428467   22400 main.go:141] libmachine: (functional-377836) Calling .Close
	I0703 22:56:39.428570   22400 main.go:141] libmachine: Making call to close driver server
	I0703 22:56:39.428585   22400 main.go:141] libmachine: (functional-377836) Calling .Close
	I0703 22:56:39.428769   22400 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:56:39.428780   22400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:56:39.428787   22400 main.go:141] libmachine: Making call to close driver server
	I0703 22:56:39.428793   22400 main.go:141] libmachine: (functional-377836) Calling .Close
	I0703 22:56:39.428859   22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
	I0703 22:56:39.428898   22400 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:56:39.428908   22400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:56:39.428920   22400 main.go:141] libmachine: Making call to close driver server
	I0703 22:56:39.428926   22400 main.go:141] libmachine: (functional-377836) Calling .Close
	I0703 22:56:39.428983   22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
	I0703 22:56:39.429008   22400 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:56:39.429017   22400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:56:39.429257   22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
	I0703 22:56:39.429300   22400 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:56:39.429327   22400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:56:39.434894   22400 main.go:141] libmachine: Making call to close driver server
	I0703 22:56:39.434902   22400 main.go:141] libmachine: (functional-377836) Calling .Close
	I0703 22:56:39.435141   22400 main.go:141] libmachine: Successfully made call to close driver server
	I0703 22:56:39.435152   22400 main.go:141] libmachine: Making call to close connection to plugin binary
	I0703 22:56:39.435160   22400 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
	I0703 22:56:39.437054   22400 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0703 22:56:39.438180   22400 addons.go:510] duration metric: took 1.010874227s for enable addons: enabled=[storage-provisioner default-storageclass]
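	The addon flow above has two steps: ssh_runner copies the manifest into /etc/kubernetes/addons on the node, then runs the version-matched kubectl against the node-local kubeconfig. A hypothetical Go sketch of those two steps for reproducing them by hand follows (applyAddon is not minikube's ssh_runner; it assumes it runs on the node with root privileges, and the paths are copied from the log lines above):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	// applyAddon writes a manifest where minikube stages its addons and applies
	// it the same way the log does.
	func applyAddon(manifest []byte, path string) error {
		// equivalent of `scp memory --> /etc/kubernetes/addons/...` above
		if err := os.WriteFile(path, manifest, 0o644); err != nil {
			return err
		}
		// equivalent of the `sudo KUBECONFIG=... kubectl apply -f ...` lines above
		out, err := exec.Command("sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.30.2/kubectl", "apply", "-f", path).CombinedOutput()
		fmt.Print(string(out))
		return err
	}
	
	func main() {
		yaml, _ := os.ReadFile("storage-provisioner.yaml") // hypothetical local copy
		if err := applyAddon(yaml, "/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}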
	I0703 22:56:39.496870   22400 pod_ready.go:92] pod "kube-apiserver-functional-377836" in "kube-system" namespace has status "Ready":"True"
	I0703 22:56:39.496882   22400 pod_ready.go:81] duration metric: took 399.386622ms for pod "kube-apiserver-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:39.496892   22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:40.900223   22400 pod_ready.go:97] node "functional-377836" hosting pod "kube-controller-manager-functional-377836" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-377836": Get "https://192.168.39.219:8441/api/v1/nodes/functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:40.900240   22400 pod_ready.go:81] duration metric: took 1.403341811s for pod "kube-controller-manager-functional-377836" in "kube-system" namespace to be "Ready" ...
	E0703 22:56:40.900251   22400 pod_ready.go:66] WaitExtra: waitPodCondition: node "functional-377836" hosting pod "kube-controller-manager-functional-377836" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "functional-377836": Get "https://192.168.39.219:8441/api/v1/nodes/functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:40.900274   22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:40.900614   22400 pod_ready.go:97] error getting pod "kube-proxy-pgfqk" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-proxy-pgfqk": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:40.900625   22400 pod_ready.go:81] duration metric: took 344.603µs for pod "kube-proxy-pgfqk" in "kube-system" namespace to be "Ready" ...
	E0703 22:56:40.900634   22400 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-proxy-pgfqk" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-proxy-pgfqk": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:40.900647   22400 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
	I0703 22:56:40.901019   22400 pod_ready.go:97] error getting pod "kube-scheduler-functional-377836" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:40.901029   22400 pod_ready.go:81] duration metric: took 375.908µs for pod "kube-scheduler-functional-377836" in "kube-system" namespace to be "Ready" ...
	E0703 22:56:40.901036   22400 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-377836" in "kube-system" namespace (skipping!): Get "https://192.168.39.219:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-377836": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:40.901049   22400 pod_ready.go:38] duration metric: took 2.260323765s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0703 22:56:40.901063   22400 api_server.go:52] waiting for apiserver process to appear ...
	I0703 22:56:40.901101   22400 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 22:56:40.915384   22400 api_server.go:72] duration metric: took 2.48809742s to wait for apiserver process to appear ...
	I0703 22:56:40.915397   22400 api_server.go:88] waiting for apiserver healthz status ...
	I0703 22:56:40.915413   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:40.915791   22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
	I0703 22:56:41.416474   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:56:41.417020   22400 api_server.go:269] stopped: https://192.168.39.219:8441/healthz: Get "https://192.168.39.219:8441/healthz": dial tcp 192.168.39.219:8441: connect: connection refused
	[... the probe/"connection refused" pair above repeated at ~500ms intervals, 73 more times, from 22:56:41 through 22:57:17 (146 lines elided) ...]
	I0703 22:57:18.416382   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:57:19.968917   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0703 22:57:19.968935   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0703 22:57:19.968946   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:57:20.059918   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0703 22:57:20.059939   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0703 22:57:20.416388   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:57:20.420821   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 22:57:20.420838   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
	[... identical 31-line healthz body elided; it repeats the 500 response above verbatim ...]
	I0703 22:57:20.915476   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:57:20.920198   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0703 22:57:20.920215   22400 api_server.go:103] status: https://192.168.39.219:8441/healthz returned error 500:
	[... identical 31-line healthz body elided; it repeats the 500 response above verbatim ...]
	I0703 22:57:21.415768   22400 api_server.go:253] Checking apiserver healthz at https://192.168.39.219:8441/healthz ...
	I0703 22:57:21.420174   22400 api_server.go:279] https://192.168.39.219:8441/healthz returned 200:
	ok
	I0703 22:57:21.426044   22400 api_server.go:141] control plane version: v1.30.2
	I0703 22:57:21.426059   22400 api_server.go:131] duration metric: took 40.510656123s to wait for apiserver health ...
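	The 40.5s health wait above passes through four phases: connection refused while the kube-apiserver process was down, 403 once it was listening but before anonymous access to /healthz had been granted (that access comes from the system:public-info-viewer ClusterRole, created by the rbac/bootstrap-roles post-start hook that the 500 bodies above still show as failed), 500 while the remaining post-start hooks finished, and finally 200. A minimal Go sketch of this kind of poller follows, assuming the ~500ms cadence seen in the log (pollHealthz is hypothetical, not api_server.go's actual code):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// pollHealthz probes /healthz until it returns 200 OK, treating connection
	// errors and non-200 statuses as "keep waiting".
	func pollHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The serving cert is untrusted from outside the node; minikube
			// instead trusts the cluster CA, which is the safer choice.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the `returned 200: ok` case above
				}
				// 403 before RBAC bootstrap, 500 while post-start hooks finish
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the cadence in the log
		}
		return fmt.Errorf("apiserver not healthy within %v", timeout)
	}
	
	func main() {
		fmt.Println(pollHealthz("https://192.168.39.219:8441/healthz", time.Minute))
	}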
	I0703 22:57:21.426066   22400 system_pods.go:43] waiting for kube-system pods to appear ...
	I0703 22:57:21.433352   22400 system_pods.go:59] 7 kube-system pods found
	I0703 22:57:21.433363   22400 system_pods.go:61] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:21.433368   22400 system_pods.go:61] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:21.433373   22400 system_pods.go:61] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:21.433380   22400 system_pods.go:61] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:21.433384   22400 system_pods.go:61] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:21.433388   22400 system_pods.go:61] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:21.433392   22400 system_pods.go:61] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:21.433397   22400 system_pods.go:74] duration metric: took 7.325714ms to wait for pod list to return data ...
	I0703 22:57:21.433403   22400 default_sa.go:34] waiting for default service account to be created ...
	I0703 22:57:21.435279   22400 default_sa.go:45] found service account: "default"
	I0703 22:57:21.435286   22400 default_sa.go:55] duration metric: took 1.87962ms for default service account to be created ...
	I0703 22:57:21.435292   22400 system_pods.go:116] waiting for k8s-apps to be running ...
	I0703 22:57:21.439968   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:21.439976   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:21.439979   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:21.439982   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:21.439986   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:21.439988   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:21.439992   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:21.439995   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:21.440006   22400 retry.go:31] will retry after 205.929915ms: missing components: kube-apiserver
	[... eight further re-checks of the same 7-pod listing elided (from 22:57:21.65 to 22:57:26.50), with retry backoff growing 309ms, 343ms, 522ms, 479ms, 736ms, 964ms, 1.45s, 1.47s; kube-apiserver-functional-377836 stayed Pending throughout ...]
	I0703 22:57:27.988943   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:27.988957   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:27.988960   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:27.988964   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:27.988967   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:27.988969   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:27.988972   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:27.988975   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:27.988984   22400 retry.go:31] will retry after 1.778603948s: missing components: kube-apiserver
	I0703 22:57:29.772319   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:29.772331   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:29.772335   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:29.772338   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:29.772341   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:29.772344   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:29.772347   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:29.772350   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:29.772360   22400 retry.go:31] will retry after 2.398104912s: missing components: kube-apiserver
	I0703 22:57:32.176857   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:32.176870   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:32.176874   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:32.176877   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:32.176880   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:32.176883   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:32.176886   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:32.176889   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:32.176899   22400 retry.go:31] will retry after 2.300708214s: missing components: kube-apiserver
	I0703 22:57:34.483251   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:34.483263   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:34.483267   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:34.483271   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:34.483274   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:34.483277   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:34.483280   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:34.483283   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:34.483293   22400 retry.go:31] will retry after 2.770844413s: missing components: kube-apiserver
	I0703 22:57:37.261017   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:37.261028   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:37.261032   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:37.261035   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:37.261038   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:37.261040   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:37.261043   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:37.261046   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:37.261055   22400 retry.go:31] will retry after 5.182347531s: missing components: kube-apiserver
	I0703 22:57:42.452925   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:42.452938   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:42.452941   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:42.452945   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:42.452947   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:42.452950   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:42.452953   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:42.452956   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:42.452966   22400 retry.go:31] will retry after 6.155487281s: missing components: kube-apiserver
	I0703 22:57:48.614848   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:48.614863   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:48.614867   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:48.614870   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Pending
	I0703 22:57:48.614872   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:48.614875   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:48.614878   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:48.614881   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:48.614893   22400 retry.go:31] will retry after 7.232822524s: missing components: kube-apiserver
	I0703 22:57:55.853881   22400 system_pods.go:86] 7 kube-system pods found
	I0703 22:57:55.853898   22400 system_pods.go:89] "coredns-7db6d8ff4d-4w94w" [f3801bb6-4310-419e-81d4-867823def4ec] Running
	I0703 22:57:55.853904   22400 system_pods.go:89] "etcd-functional-377836" [9e11e64f-9978-4fe0-8346-cb2c9a913b63] Running
	I0703 22:57:55.853915   22400 system_pods.go:89] "kube-apiserver-functional-377836" [22b983f4-c7b8-492c-bac2-90d4b68c0da4] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0703 22:57:55.853922   22400 system_pods.go:89] "kube-controller-manager-functional-377836" [1d054f92-1573-4ab2-94a9-7e0c7336adbc] Running
	I0703 22:57:55.853928   22400 system_pods.go:89] "kube-proxy-pgfqk" [55d3c679-a05f-4dad-bd04-ab0e0b51d0b1] Running
	I0703 22:57:55.853933   22400 system_pods.go:89] "kube-scheduler-functional-377836" [ebd41990-b874-4ee4-a670-21c271b39c4e] Running
	I0703 22:57:55.853938   22400 system_pods.go:89] "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
	I0703 22:57:55.853945   22400 system_pods.go:126] duration metric: took 34.418648577s to wait for k8s-apps to be running ...
	I0703 22:57:55.853951   22400 system_svc.go:44] waiting for kubelet service to be running ....
	I0703 22:57:55.854000   22400 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 22:57:55.871280   22400 system_svc.go:56] duration metric: took 17.314505ms WaitForService to wait for kubelet
	I0703 22:57:55.871293   22400 kubeadm.go:576] duration metric: took 1m17.444010838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0703 22:57:55.871310   22400 node_conditions.go:102] verifying NodePressure condition ...
	I0703 22:57:55.874516   22400 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0703 22:57:55.874526   22400 node_conditions.go:123] node cpu capacity is 2
	I0703 22:57:55.874535   22400 node_conditions.go:105] duration metric: took 3.22147ms to run NodePressure ...
	I0703 22:57:55.874544   22400 start.go:240] waiting for startup goroutines ...
	I0703 22:57:55.874549   22400 start.go:245] waiting for cluster config update ...
	I0703 22:57:55.874558   22400 start.go:254] writing updated cluster config ...
	I0703 22:57:55.874806   22400 ssh_runner.go:195] Run: rm -f paused
	I0703 22:57:55.926722   22400 start.go:600] kubectl: 1.30.2, cluster: 1.30.2 (minor skew: 0)
	I0703 22:57:55.928522   22400 out.go:177] * Done! kubectl is now configured to use "functional-377836" cluster and "default" namespace by default
	
	
	==> Docker <==
	Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.641548683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.643510933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:56:25 functional-377836 cri-dockerd[6427]: time="2024-07-03T22:56:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fa142dde95510fec3b78bc7ea9b968256055dcc2f08d0a7f358e091bd954c5ee/resolv.conf as [nameserver 192.168.122.1]"
	Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938106048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938240838Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938321323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:56:25 functional-377836 dockerd[6155]: time="2024-07-03T22:56:25.938559611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:57:09 functional-377836 dockerd[6149]: time="2024-07-03T22:57:09.845564280Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e
	Jul 03 22:57:09 functional-377836 dockerd[6149]: time="2024-07-03T22:57:09.895345942Z" level=info msg="ignoring event" container=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.895771635Z" level=info msg="shim disconnected" id=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e namespace=moby
	Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.895915061Z" level=warning msg="cleaning up after shim disconnected" id=129792b10c13bc54890513a3774d03385ca7b10f4078c055ca2fd389dabfb25e namespace=moby
	Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.895928991Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 03 22:57:09 functional-377836 dockerd[6149]: time="2024-07-03T22:57:09.969502999Z" level=info msg="ignoring event" container=6e039bed70198a674f9d1014dcf4c4bd6c1474aa1ac8229f4ff884e074fecfe2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.969745518Z" level=info msg="shim disconnected" id=6e039bed70198a674f9d1014dcf4c4bd6c1474aa1ac8229f4ff884e074fecfe2 namespace=moby
	Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.970815736Z" level=warning msg="cleaning up after shim disconnected" id=6e039bed70198a674f9d1014dcf4c4bd6c1474aa1ac8229f4ff884e074fecfe2 namespace=moby
	Jul 03 22:57:09 functional-377836 dockerd[6155]: time="2024-07-03T22:57:09.970846752Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.944805774Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.945014802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.945027320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:57:17 functional-377836 dockerd[6155]: time="2024-07-03T22:57:17.945234344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:57:18 functional-377836 cri-dockerd[6427]: time="2024-07-03T22:57:18Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4b240ad69522b33b3e25233635b92d01c1a3b328290d69f6171bddc5924a8344/resolv.conf as [nameserver 192.168.122.1]"
	Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.104833953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.105215502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.105238330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 03 22:57:18 functional-377836 dockerd[6155]: time="2024-07-03T22:57:18.105427227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	0790dd5ddc5ea       56ce0fd9fb532       38 seconds ago       Running             kube-apiserver            0                   4b240ad69522b       kube-apiserver-functional-377836
	3640a066b0712       cbb01a7bd410d       About a minute ago   Running             coredns                   3                   fa142dde95510       coredns-7db6d8ff4d-4w94w
	ee9b7d68186f2       53c535741fb44       About a minute ago   Running             kube-proxy                3                   b4b24202e8c4a       kube-proxy-pgfqk
	7917e365b1481       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   e7aa3982a9066       storage-provisioner
	f2cde61576666       7820c83aa1394       About a minute ago   Running             kube-scheduler            3                   2ce690ad303e2       kube-scheduler-functional-377836
	8991ec818d243       3861cfcd7c04c       About a minute ago   Running             etcd                      3                   843b20a3a87d9       etcd-functional-377836
	2be910c8e295a       e874818b3caac       About a minute ago   Running             kube-controller-manager   3                   b7387557faedf       kube-controller-manager-functional-377836
	6abeb2402f6db       cbb01a7bd410d       About a minute ago   Created             coredns                   2                   b406ab73e5d2f       coredns-7db6d8ff4d-4w94w
	f9863ca2c40f6       e874818b3caac       About a minute ago   Created             kube-controller-manager   2                   fb0a93e6301f9       kube-controller-manager-functional-377836
	08c3c84948f0a       53c535741fb44       About a minute ago   Created             kube-proxy                2                   d73ef8b96e2cb       kube-proxy-pgfqk
	71c0b16f3679a       3861cfcd7c04c       About a minute ago   Created             etcd                      2                   d19de58b969dd       etcd-functional-377836
	4aa40d2e115b4       7820c83aa1394       About a minute ago   Created             kube-scheduler            2                   757aac77b2425       kube-scheduler-functional-377836
	3720e138f218e       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   3180d83316a48       storage-provisioner
	
	
	==> coredns [3640a066b071] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:45296 - 44991 "HINFO IN 8945907258705290674.6296398039332437337. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.04696491s
	
	
	==> coredns [6abeb2402f6d] <==
	
	
	==> describe nodes <==
	Name:               functional-377836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-377836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=517d26970121680bead3b5e355569714a6e9cc9e
	                    minikube.k8s.io/name=functional-377836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_03T22_54_20_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 03 Jul 2024 22:54:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-377836
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 03 Jul 2024 22:57:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 03 Jul 2024 22:57:25 +0000   Wed, 03 Jul 2024 22:57:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 03 Jul 2024 22:57:25 +0000   Wed, 03 Jul 2024 22:57:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 03 Jul 2024 22:57:25 +0000   Wed, 03 Jul 2024 22:57:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 03 Jul 2024 22:57:25 +0000   Wed, 03 Jul 2024 22:57:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.219
	  Hostname:    functional-377836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 d0f757d88bb549828598c5bc7b79d26e
	  System UUID:                d0f757d8-8bb5-4982-8598-c5bc7b79d26e
	  Boot ID:                    fae7a8c5-c2c5-45e8-b79d-40bf2a5ee916
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.2
	  Kube-Proxy Version:         v1.30.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-4w94w                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m23s
	  kube-system                 etcd-functional-377836                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m37s
	  kube-system                 kube-apiserver-functional-377836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-functional-377836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 kube-proxy-pgfqk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	  kube-system                 kube-scheduler-functional-377836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m37s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 90s                    kube-proxy       
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  Starting                 3m21s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  3m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m43s (x8 over 3m43s)  kubelet          Node functional-377836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m43s (x8 over 3m43s)  kubelet          Node functional-377836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m43s (x7 over 3m43s)  kubelet          Node functional-377836 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     3m37s                  kubelet          Node functional-377836 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m37s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m37s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m37s                  kubelet          Node functional-377836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m37s                  kubelet          Node functional-377836 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                3m35s                  kubelet          Node functional-377836 status is now: NodeReady
	  Normal  RegisteredNode           3m24s                  node-controller  Node functional-377836 event: Registered Node functional-377836 in Controller
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node functional-377836 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node functional-377836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node functional-377836 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m5s                   node-controller  Node functional-377836 event: Registered Node functional-377836 in Controller
	  Normal  Starting                 97s                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  97s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  96s (x8 over 97s)      kubelet          Node functional-377836 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s (x8 over 97s)      kubelet          Node functional-377836 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s (x7 over 97s)      kubelet          Node functional-377836 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           80s                    node-controller  Node functional-377836 event: Registered Node functional-377836 in Controller
	  Normal  NodeNotReady             35s                    node-controller  Node functional-377836 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.164742] systemd-fstab-generator[4014]: Ignoring "noauto" option for root device
	[  +0.457412] systemd-fstab-generator[4179]: Ignoring "noauto" option for root device
	[  +2.012633] systemd-fstab-generator[4301]: Ignoring "noauto" option for root device
	[  +0.063428] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.477782] kauditd_printk_skb: 52 callbacks suppressed
	[ +11.712346] kauditd_printk_skb: 32 callbacks suppressed
	[  +1.066440] systemd-fstab-generator[5212]: Ignoring "noauto" option for root device
	[  +5.075303] kauditd_printk_skb: 14 callbacks suppressed
	[Jul 3 22:56] systemd-fstab-generator[5678]: Ignoring "noauto" option for root device
	[  +0.326346] systemd-fstab-generator[5712]: Ignoring "noauto" option for root device
	[  +0.170294] systemd-fstab-generator[5724]: Ignoring "noauto" option for root device
	[  +0.162508] systemd-fstab-generator[5738]: Ignoring "noauto" option for root device
	[  +5.206359] kauditd_printk_skb: 89 callbacks suppressed
	[  +7.689430] systemd-fstab-generator[6375]: Ignoring "noauto" option for root device
	[  +0.130438] systemd-fstab-generator[6387]: Ignoring "noauto" option for root device
	[  +0.131761] systemd-fstab-generator[6399]: Ignoring "noauto" option for root device
	[  +0.149993] systemd-fstab-generator[6414]: Ignoring "noauto" option for root device
	[  +0.464102] systemd-fstab-generator[6583]: Ignoring "noauto" option for root device
	[  +1.509332] kauditd_printk_skb: 185 callbacks suppressed
	[  +1.464439] systemd-fstab-generator[7430]: Ignoring "noauto" option for root device
	[  +5.534761] kauditd_printk_skb: 61 callbacks suppressed
	[ +11.664602] kauditd_printk_skb: 26 callbacks suppressed
	[  +1.641291] systemd-fstab-generator[8463]: Ignoring "noauto" option for root device
	[Jul 3 22:57] kauditd_printk_skb: 16 callbacks suppressed
	[ +35.272072] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [71c0b16f3679] <==
	
	
	==> etcd [8991ec818d24] <==
	{"level":"info","ts":"2024-07-03T22:56:21.517942Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T22:56:21.518068Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-03T22:56:21.518435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 switched to configuration voters=(2930583753691095924)"}
	{"level":"info","ts":"2024-07-03T22:56:21.520954Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","added-peer-id":"28ab8665a749e374","added-peer-peer-urls":["https://192.168.39.219:2380"]}
	{"level":"info","ts":"2024-07-03T22:56:21.521221Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"14fc06d09ccfd789","local-member-id":"28ab8665a749e374","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T22:56:21.523938Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-03T22:56:21.522057Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-03T22:56:21.524683Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"28ab8665a749e374","initial-advertise-peer-urls":["https://192.168.39.219:2380"],"listen-peer-urls":["https://192.168.39.219:2380"],"advertise-client-urls":["https://192.168.39.219:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.219:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-03T22:56:21.524725Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-03T22:56:21.522081Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-07-03T22:56:21.525073Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.219:2380"}
	{"level":"info","ts":"2024-07-03T22:56:22.961398Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-03T22:56:22.961523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-03T22:56:22.961643Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgPreVoteResp from 28ab8665a749e374 at term 3"}
	{"level":"info","ts":"2024-07-03T22:56:22.961722Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became candidate at term 4"}
	{"level":"info","ts":"2024-07-03T22:56:22.961742Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 received MsgVoteResp from 28ab8665a749e374 at term 4"}
	{"level":"info","ts":"2024-07-03T22:56:22.961804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"28ab8665a749e374 became leader at term 4"}
	{"level":"info","ts":"2024-07-03T22:56:22.961847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 28ab8665a749e374 elected leader 28ab8665a749e374 at term 4"}
	{"level":"info","ts":"2024-07-03T22:56:22.967489Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"28ab8665a749e374","local-member-attributes":"{Name:functional-377836 ClientURLs:[https://192.168.39.219:2379]}","request-path":"/0/members/28ab8665a749e374/attributes","cluster-id":"14fc06d09ccfd789","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-03T22:56:22.967501Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T22:56:22.967523Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-03T22:56:22.967784Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-03T22:56:22.968473Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-03T22:56:22.970485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.219:2379"}
	{"level":"info","ts":"2024-07-03T22:56:22.970547Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 22:57:56 up 4 min,  0 users,  load average: 1.12, 0.79, 0.34
	Linux functional-377836 5.10.207 #1 SMP Tue Jul 2 18:53:17 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0790dd5ddc5e] <==
	I0703 22:57:19.992224       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0703 22:57:19.992276       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0703 22:57:19.992346       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0703 22:57:19.992678       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0703 22:57:19.992981       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0703 22:57:20.088174       1 shared_informer.go:320] Caches are synced for configmaps
	I0703 22:57:20.089350       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0703 22:57:20.090290       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0703 22:57:20.090403       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0703 22:57:20.091176       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0703 22:57:20.104075       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0703 22:57:20.104349       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0703 22:57:20.104473       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0703 22:57:20.104740       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0703 22:57:20.105382       1 aggregator.go:165] initial CRD sync complete...
	I0703 22:57:20.105413       1 autoregister_controller.go:141] Starting autoregister controller
	I0703 22:57:20.105418       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0703 22:57:20.105424       1 cache.go:39] Caches are synced for autoregister controller
	I0703 22:57:20.115591       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0703 22:57:20.115906       1 policy_source.go:224] refreshing policies
	I0703 22:57:20.167545       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0703 22:57:20.892319       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0703 22:57:21.134719       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.219]
	I0703 22:57:21.136319       1 controller.go:615] quota admission added evaluator for: endpoints
	I0703 22:57:21.140457       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [2be910c8e295] <==
	E0703 22:57:20.016164       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodTemplate: unknown (get podtemplates)
	E0703 22:57:20.016204       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)
	E0703 22:57:20.016226       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ResourceQuota: unknown (get resourcequotas)
	E0703 22:57:20.016241       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)
	E0703 22:57:20.016256       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.IngressClass: unknown (get ingressclasses.networking.k8s.io)
	E0703 22:57:20.016293       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RoleBinding: unknown (get rolebindings.rbac.authorization.k8s.io)
	E0703 22:57:20.016332       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v2.HorizontalPodAutoscaler: unknown (get horizontalpodautoscalers.autoscaling)
	E0703 22:57:20.016352       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: unknown (get statefulsets.apps)
	E0703 22:57:20.016367       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)
	E0703 22:57:20.016380       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: unknown
	E0703 22:57:20.016391       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0703 22:57:20.035317       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CronJob: unknown (get cronjobs.batch)
	E0703 22:57:20.035656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Ingress: unknown (get ingresses.networking.k8s.io)
	E0703 22:57:20.037202       1 reflector.go:150] k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: unknown
	I0703 22:57:21.644036       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0703 22:57:21.661961       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-377836" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-377836\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-377836, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 80bc54ed-3e0b-40c2-9e36-5889e4c30b1d, UID in object meta: 22b983f4-c7b8-492c-bac2-90d4b68c0da4"
	E0703 22:57:21.723835       1 node_lifecycle_controller.go:753] unable to mark all pods NotReady on node functional-377836: Operation cannot be fulfilled on pods "kube-apiserver-functional-377836": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-377836, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 80bc54ed-3e0b-40c2-9e36-5889e4c30b1d, UID in object meta: 22b983f4-c7b8-492c-bac2-90d4b68c0da4; queuing for retry
	I0703 22:57:21.724443       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	E0703 22:57:26.729941       1 node_lifecycle_controller.go:973] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-377836\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-377836"
	I0703 22:57:26.752719       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0703 22:57:31.637442       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="169.575µs"
	I0703 22:57:45.183469       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.768982ms"
	I0703 22:57:45.184067       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="205.96µs"
	I0703 22:57:49.999979       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.278403ms"
	I0703 22:57:50.000063       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.551µs"
	
	
	==> kube-controller-manager [f9863ca2c40f] <==
	
	
	==> kube-proxy [08c3c84948f0] <==
	
	
	==> kube-proxy [ee9b7d68186f] <==
	I0703 22:56:25.783083       1 server_linux.go:69] "Using iptables proxy"
	I0703 22:56:25.816323       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.39.219"]
	I0703 22:56:25.867693       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0703 22:56:25.867754       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0703 22:56:25.867828       1 server_linux.go:165] "Using iptables Proxier"
	I0703 22:56:25.871028       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0703 22:56:25.871524       1 server.go:872] "Version info" version="v1.30.2"
	I0703 22:56:25.871839       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 22:56:25.873442       1 config.go:192] "Starting service config controller"
	I0703 22:56:25.875815       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0703 22:56:25.873606       1 config.go:101] "Starting endpoint slice config controller"
	I0703 22:56:25.875852       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0703 22:56:25.874138       1 config.go:319] "Starting node config controller"
	I0703 22:56:25.875916       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0703 22:56:25.976460       1 shared_informer.go:320] Caches are synced for node config
	I0703 22:56:25.976509       1 shared_informer.go:320] Caches are synced for service config
	I0703 22:56:25.976561       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4aa40d2e115b] <==
	
	
	==> kube-scheduler [f2cde6157666] <==
	I0703 22:56:22.295321       1 serving.go:380] Generated self-signed cert in-memory
	W0703 22:56:24.184821       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0703 22:56:24.185128       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0703 22:56:24.185332       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0703 22:56:24.185454       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0703 22:56:24.281556       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.2"
	I0703 22:56:24.281807       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0703 22:56:24.283780       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0703 22:56:24.284140       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0703 22:56:24.287951       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0703 22:56:24.284158       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0703 22:56:24.388678       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0703 22:57:19.981106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: unknown (get replicationcontrollers)
	E0703 22:57:19.983405       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: unknown (get storageclasses.storage.k8s.io)
	E0703 22:57:19.983693       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: unknown (get poddisruptionbudgets.policy)
	
	
	==> kubelet <==
	Jul 03 22:57:10 functional-377836 kubelet[7437]: I0703 22:57:10.268552    7437 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"59d9adb16464a253be5b23867f9bc024882c1b7d23cd5a7f0476a54e5cfb47c5"} err="failed to get container status \"59d9adb16464a253be5b23867f9bc024882c1b7d23cd5a7f0476a54e5cfb47c5\": rpc error: code = Unknown desc = Error response from daemon: No such container: 59d9adb16464a253be5b23867f9bc024882c1b7d23cd5a7f0476a54e5cfb47c5"
	Jul 03 22:57:11 functional-377836 kubelet[7437]: E0703 22:57:11.198062    7437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused" interval="7s"
	Jul 03 22:57:11 functional-377836 kubelet[7437]: I0703 22:57:11.853091    7437 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e880399f9148ced2c133b53d7537abc" path="/var/lib/kubelet/pods/1e880399f9148ced2c133b53d7537abc/volumes"
	Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.300475    7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?resourceVersion=0&timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
	Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.301674    7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
	Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.302277    7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
	Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.302805    7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
	Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.303359    7437 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"functional-377836\": Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused"
	Jul 03 22:57:15 functional-377836 kubelet[7437]: E0703 22:57:15.303417    7437 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 03 22:57:17 functional-377836 kubelet[7437]: I0703 22:57:17.848505    7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
	Jul 03 22:57:17 functional-377836 kubelet[7437]: E0703 22:57:17.849800    7437 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-377836\": dial tcp 192.168.39.219:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-377836"
	Jul 03 22:57:18 functional-377836 kubelet[7437]: E0703 22:57:18.023390    7437 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/events\": dial tcp 192.168.39.219:8441: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-functional-377836.17ded6081c17d2d2  kube-system    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-functional-377836,UID:af96a50731406e4b1662571b5822a697,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"registry.k8s.io/kube-apiserver:v1.30.2\" already present on machine,Source:EventSource{Component:kubelet,Host:functional-377836,},FirstTimestamp:2024-07-03 22:57:18.021513938 +0000 UTC m=+58.288217776,LastTimestamp:2024-07-03 22:57:18.021513938 +0000 UTC m=+58.288217776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:functional-377836,}"
	Jul 03 22:57:18 functional-377836 kubelet[7437]: E0703 22:57:18.199716    7437 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-377836?timeout=10s\": dial tcp 192.168.39.219:8441: connect: connection refused" interval="7s"
	Jul 03 22:57:18 functional-377836 kubelet[7437]: I0703 22:57:18.311568    7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
	Jul 03 22:57:19 functional-377836 kubelet[7437]: E0703 22:57:19.880047    7437 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 03 22:57:19 functional-377836 kubelet[7437]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 03 22:57:19 functional-377836 kubelet[7437]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 03 22:57:19 functional-377836 kubelet[7437]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 03 22:57:19 functional-377836 kubelet[7437]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 03 22:57:19 functional-377836 kubelet[7437]: E0703 22:57:19.980647    7437 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: unknown (get configmaps)
	Jul 03 22:57:20 functional-377836 kubelet[7437]: I0703 22:57:20.158957    7437 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-377836"
	Jul 03 22:57:20 functional-377836 kubelet[7437]: I0703 22:57:20.184405    7437 status_manager.go:877] "Failed to update status for pod" pod="kube-system/kube-apiserver-functional-377836" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"80bc54ed-3e0b-40c2-9e36-5889e4c30b1d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2024-07-03T22:57:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2024-07-03T22:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2024-07-03T22:57:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"docker://0790dd5ddc5ea977a68ed1752c2402bd2edd431104d0d2889326b8b61e057862\\\",\\\"image\\\":\\\"registry.k8s.io/kube-apiserver:v1.30.2\\\",\\\"imageID\\\":\\\"docker-pullable://registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2024-07-03T22:57:18Z\\\"}}}]}}\" for pod \"kube-system\"/\"kube-apiserver-functional-377836\": Pod \"kube-apiserver-functional-377836\" is invalid: metadata.uid: Invalid value: \"80bc54ed-3e0b-40c2-9e36-5889e4c30b1d\": field is immutable"
	Jul 03 22:57:20 functional-377836 kubelet[7437]: I0703 22:57:20.327504    7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
	Jul 03 22:57:27 functional-377836 kubelet[7437]: I0703 22:57:27.853490    7437 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-377836" podUID="80bc54ed-3e0b-40c2-9e36-5889e4c30b1d"
	Jul 03 22:57:49 functional-377836 kubelet[7437]: I0703 22:57:49.886832    7437 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-functional-377836" podStartSLOduration=29.886812599 podStartE2EDuration="29.886812599s" podCreationTimestamp="2024-07-03 22:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-03 22:57:48.692934581 +0000 UTC m=+88.959638428" watchObservedRunningTime="2024-07-03 22:57:49.886812599 +0000 UTC m=+90.153516442"
	
	
	==> storage-provisioner [3720e138f218] <==
	I0703 22:55:57.286018       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0703 22:55:57.298345       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0703 22:55:57.300074       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [7917e365b148] <==
	I0703 22:56:25.485591       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0703 22:56:25.521715       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0703 22:56:25.521939       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	E0703 22:56:39.900004       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:42.919827       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:46.569448       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:48.728324       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:51.105471       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:53.338960       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:56.061793       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:56:59.299057       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:03.253191       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:05.768332       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:08.682577       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:11.447330       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:14.574234       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:17.254538       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	E0703 22:57:19.959223       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0703 22:57:23.604352       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0703 22:57:23.604823       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad933137-5c62-417f-8f1f-2e28493beebc", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-377836_4acc0783-8e29-4d43-b1fc-96eb83434b04 became leader
	I0703 22:57:23.604973       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-377836_4acc0783-8e29-4d43-b1fc-96eb83434b04!
	I0703 22:57:23.705324       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-377836_4acc0783-8e29-4d43-b1fc-96eb83434b04!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-377836 -n functional-377836
helpers_test.go:261: (dbg) Run:  kubectl --context functional-377836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.58s)


Test pass (309/341)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 14.88
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.12
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.30.2/json-events 4.03
13 TestDownloadOnly/v1.30.2/preload-exists 0
17 TestDownloadOnly/v1.30.2/LogsDuration 0.05
18 TestDownloadOnly/v1.30.2/DeleteAll 0.12
19 TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds 0.11
21 TestBinaryMirror 0.53
22 TestOffline 108.37
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 231.47
29 TestAddons/parallel/Registry 16.57
30 TestAddons/parallel/Ingress 22.88
31 TestAddons/parallel/InspektorGadget 12.16
32 TestAddons/parallel/MetricsServer 5.85
33 TestAddons/parallel/HelmTiller 11.98
35 TestAddons/parallel/CSI 42.7
36 TestAddons/parallel/Headlamp 13.61
37 TestAddons/parallel/CloudSpanner 6.55
38 TestAddons/parallel/LocalPath 15.08
39 TestAddons/parallel/NvidiaDevicePlugin 6.5
40 TestAddons/parallel/Yakd 6.01
41 TestAddons/parallel/Volcano 44.91
44 TestAddons/serial/GCPAuth/Namespaces 0.11
45 TestAddons/StoppedEnableDisable 13.53
46 TestCertOptions 114.4
47 TestCertExpiration 341.64
48 TestDockerFlags 103.09
49 TestForceSystemdFlag 98.38
50 TestForceSystemdEnv 71.54
52 TestKVMDriverInstallOrUpdate 5.52
56 TestErrorSpam/setup 51.43
57 TestErrorSpam/start 0.32
58 TestErrorSpam/status 0.72
59 TestErrorSpam/pause 1.22
60 TestErrorSpam/unpause 1.23
61 TestErrorSpam/stop 6.43
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 104.66
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.53
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.14
73 TestFunctional/serial/CacheCmd/cache/add_local 1.23
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
75 TestFunctional/serial/CacheCmd/cache/list 0.04
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.21
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.08
78 TestFunctional/serial/CacheCmd/cache/delete 0.08
79 TestFunctional/serial/MinikubeKubectlCmd 0.1
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 115.46
83 TestFunctional/serial/LogsCmd 0.96
84 TestFunctional/serial/LogsFileCmd 0.93
85 TestFunctional/serial/InvalidService 4.32
87 TestFunctional/parallel/ConfigCmd 0.33
88 TestFunctional/parallel/DashboardCmd 13.65
89 TestFunctional/parallel/DryRun 0.24
90 TestFunctional/parallel/InternationalLanguage 0.13
91 TestFunctional/parallel/StatusCmd 0.72
95 TestFunctional/parallel/ServiceCmdConnect 28.43
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 54.07
99 TestFunctional/parallel/SSHCmd 0.34
100 TestFunctional/parallel/CpCmd 1.14
101 TestFunctional/parallel/MySQL 31.81
102 TestFunctional/parallel/FileSync 0.19
103 TestFunctional/parallel/CertSync 1.38
107 TestFunctional/parallel/NodeLabels 0.06
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
111 TestFunctional/parallel/License 0.16
112 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
122 TestFunctional/parallel/Version/short 0.05
123 TestFunctional/parallel/Version/components 0.72
124 TestFunctional/parallel/UpdateContextCmd/no_changes 0.09
125 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
126 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.09
127 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
128 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
129 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
130 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
131 TestFunctional/parallel/ImageCommands/ImageBuild 2.68
132 TestFunctional/parallel/ImageCommands/Setup 1.35
133 TestFunctional/parallel/ProfileCmd/profile_list 0.31
134 TestFunctional/parallel/DockerEnv/bash 0.73
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
136 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.1
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.74
139 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.49
140 TestFunctional/parallel/ImageCommands/ImageRemove 0.47
141 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.69
142 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.63
143 TestFunctional/parallel/ServiceCmd/DeployApp 16.29
144 TestFunctional/parallel/MountCmd/any-port 7.25
145 TestFunctional/parallel/ServiceCmd/List 1.31
146 TestFunctional/parallel/ServiceCmd/JSONOutput 1.25
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.28
148 TestFunctional/parallel/ServiceCmd/Format 0.35
149 TestFunctional/parallel/MountCmd/specific-port 1.69
150 TestFunctional/parallel/ServiceCmd/URL 0.3
151 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
152 TestFunctional/delete_addon-resizer_images 0.07
153 TestFunctional/delete_my-image_image 0.01
154 TestFunctional/delete_minikube_cached_images 0.01
155 TestGvisorAddon 222.63
158 TestMultiControlPlane/serial/StartCluster 211.98
159 TestMultiControlPlane/serial/DeployApp 5.07
160 TestMultiControlPlane/serial/PingHostFromPods 1.23
161 TestMultiControlPlane/serial/AddWorkerNode 52.95
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.83
164 TestMultiControlPlane/serial/CopyFile 12.34
165 TestMultiControlPlane/serial/StopSecondaryNode 13.93
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.61
167 TestMultiControlPlane/serial/RestartSecondaryNode 41.45
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.81
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 229.04
170 TestMultiControlPlane/serial/DeleteSecondaryNode 8.02
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.6
172 TestMultiControlPlane/serial/StopCluster 39.05
173 TestMultiControlPlane/serial/RestartCluster 156.45
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.6
175 TestMultiControlPlane/serial/AddSecondaryNode 76.07
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
179 TestImageBuild/serial/Setup 51.84
180 TestImageBuild/serial/NormalBuild 1.65
181 TestImageBuild/serial/BuildWithBuildArg 0.99
182 TestImageBuild/serial/BuildWithDockerIgnore 0.37
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.28
187 TestJSONOutput/start/Command 60.57
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.61
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.57
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.59
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.18
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 216.82
219 TestMountStart/serial/StartWithMountFirst 28.02
220 TestMountStart/serial/VerifyMountFirst 0.35
221 TestMountStart/serial/StartWithMountSecond 31.98
222 TestMountStart/serial/VerifyMountSecond 0.38
223 TestMountStart/serial/DeleteFirst 0.67
224 TestMountStart/serial/VerifyMountPostDelete 0.35
225 TestMountStart/serial/Stop 2.26
226 TestMountStart/serial/RestartStopped 26.21
227 TestMountStart/serial/VerifyMountPostStop 0.35
230 TestMultiNode/serial/FreshStart2Nodes 116.18
231 TestMultiNode/serial/DeployApp2Nodes 4.21
232 TestMultiNode/serial/PingHostFrom2Pods 0.77
233 TestMultiNode/serial/AddNode 47.52
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.55
236 TestMultiNode/serial/CopyFile 6.84
237 TestMultiNode/serial/StopNode 3.34
238 TestMultiNode/serial/StartAfterStop 32.18
239 TestMultiNode/serial/RestartKeepsNodes 151.72
240 TestMultiNode/serial/DeleteNode 2.34
241 TestMultiNode/serial/StopMultiNode 25.84
242 TestMultiNode/serial/RestartMultiNode 90.35
243 TestMultiNode/serial/ValidateNameConflict 51.92
248 TestPreload 172.79
250 TestScheduledStopUnix 235.48
251 TestSkaffold 141.12
254 TestRunningBinaryUpgrade 209.57
256 TestKubernetesUpgrade 213.67
259 TestStoppedBinaryUpgrade/Setup 0.44
266 TestStoppedBinaryUpgrade/Upgrade 209.55
268 TestPause/serial/Start 161.75
270 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
271 TestNoKubernetes/serial/StartWithK8s 81.69
272 TestPause/serial/SecondStartNoReconfiguration 59.61
273 TestNoKubernetes/serial/StartWithStopK8s 17.76
274 TestNoKubernetes/serial/Start 29.08
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.45
276 TestPause/serial/Pause 0.54
277 TestPause/serial/VerifyStatus 0.22
278 TestPause/serial/Unpause 0.49
279 TestPause/serial/PauseAgain 0.66
280 TestPause/serial/DeletePaused 0.97
281 TestPause/serial/VerifyDeletedResources 0.24
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.2
294 TestNoKubernetes/serial/ProfileList 0.55
295 TestNoKubernetes/serial/Stop 2.35
296 TestNoKubernetes/serial/StartNoArgs 97.84
297 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.19
299 TestStartStop/group/old-k8s-version/serial/FirstStart 168.52
301 TestStartStop/group/no-preload/serial/FirstStart 116.72
303 TestStartStop/group/embed-certs/serial/FirstStart 121.33
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.54
306 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 107.79
307 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.11
308 TestStartStop/group/old-k8s-version/serial/Stop 13.38
309 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
310 TestStartStop/group/old-k8s-version/serial/SecondStart 410.26
311 TestStartStop/group/no-preload/serial/DeployApp 9.34
312 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.31
313 TestStartStop/group/no-preload/serial/Stop 13.39
314 TestStartStop/group/embed-certs/serial/DeployApp 9.33
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/no-preload/serial/SecondStart 324.86
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.04
318 TestStartStop/group/embed-certs/serial/Stop 13.32
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
320 TestStartStop/group/embed-certs/serial/SecondStart 305.76
321 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.37
322 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
323 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.34
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 320.99
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
328 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
329 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
330 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
331 TestStartStop/group/embed-certs/serial/Pause 2.6
332 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
333 TestStartStop/group/no-preload/serial/Pause 2.96
335 TestStartStop/group/newest-cni/serial/FirstStart 76.45
336 TestNetworkPlugins/group/auto/Start 100.16
337 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
338 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
339 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
340 TestStartStop/group/old-k8s-version/serial/Pause 2.54
341 TestNetworkPlugins/group/flannel/Start 80.19
342 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 16.01
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
345 TestStartStop/group/newest-cni/serial/Stop 8.39
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.2
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.11
349 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.34
350 TestStartStop/group/newest-cni/serial/SecondStart 40.18
351 TestNetworkPlugins/group/enable-default-cni/Start 92.26
352 TestNetworkPlugins/group/auto/KubeletFlags 0.2
353 TestNetworkPlugins/group/auto/NetCatPod 11.24
354 TestNetworkPlugins/group/auto/DNS 21.25
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
358 TestStartStop/group/newest-cni/serial/Pause 2.48
359 TestNetworkPlugins/group/bridge/Start 80
360 TestNetworkPlugins/group/flannel/ControllerPod 6.01
361 TestNetworkPlugins/group/auto/Localhost 0.14
362 TestNetworkPlugins/group/auto/HairPin 0.12
363 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
364 TestNetworkPlugins/group/flannel/NetCatPod 14.32
365 TestNetworkPlugins/group/kubenet/Start 88.74
366 TestNetworkPlugins/group/flannel/DNS 0.21
367 TestNetworkPlugins/group/flannel/Localhost 0.17
368 TestNetworkPlugins/group/flannel/HairPin 0.14
369 TestNetworkPlugins/group/calico/Start 123.36
370 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
371 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.21
372 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
373 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
374 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
375 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
376 TestNetworkPlugins/group/bridge/NetCatPod 10.26
377 TestNetworkPlugins/group/kindnet/Start 94.72
378 TestNetworkPlugins/group/bridge/DNS 0.16
379 TestNetworkPlugins/group/bridge/Localhost 0.15
380 TestNetworkPlugins/group/bridge/HairPin 0.13
381 TestNetworkPlugins/group/custom-flannel/Start 101.58
382 TestNetworkPlugins/group/kubenet/KubeletFlags 0.21
383 TestNetworkPlugins/group/kubenet/NetCatPod 10.27
384 TestNetworkPlugins/group/kubenet/DNS 0.17
385 TestNetworkPlugins/group/kubenet/Localhost 0.12
386 TestNetworkPlugins/group/kubenet/HairPin 0.15
387 TestNetworkPlugins/group/false/Start 90.02
388 TestNetworkPlugins/group/calico/ControllerPod 6.01
389 TestNetworkPlugins/group/calico/KubeletFlags 0.26
390 TestNetworkPlugins/group/calico/NetCatPod 12.31
391 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
392 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
393 TestNetworkPlugins/group/kindnet/NetCatPod 14.26
394 TestNetworkPlugins/group/calico/DNS 0.25
395 TestNetworkPlugins/group/calico/Localhost 0.2
396 TestNetworkPlugins/group/calico/HairPin 0.18
397 TestNetworkPlugins/group/kindnet/DNS 0.29
398 TestNetworkPlugins/group/kindnet/Localhost 0.3
399 TestNetworkPlugins/group/kindnet/HairPin 0.29
400 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
401 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.25
402 TestNetworkPlugins/group/custom-flannel/DNS 0.23
403 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
404 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
405 TestNetworkPlugins/group/false/KubeletFlags 0.23
406 TestNetworkPlugins/group/false/NetCatPod 12.23
407 TestNetworkPlugins/group/false/DNS 0.17
408 TestNetworkPlugins/group/false/Localhost 0.13
409 TestNetworkPlugins/group/false/HairPin 0.13
TestDownloadOnly/v1.20.0/json-events (14.88s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-485739 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-485739 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (14.878182398s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (14.88s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0703 22:46:47.463992   16676 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0703 22:46:47.464089   16676 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-485739
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-485739: exit status 85 (52.087134ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-485739 | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC |          |
	|         | -p download-only-485739        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:46:32
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:46:32.621234   16687 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:46:32.621318   16687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:46:32.621324   16687 out.go:304] Setting ErrFile to fd 2...
	I0703 22:46:32.621328   16687 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:46:32.621485   16687 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	W0703 22:46:32.621591   16687 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18998-9391/.minikube/config/config.json: open /home/jenkins/minikube-integration/18998-9391/.minikube/config/config.json: no such file or directory
	I0703 22:46:32.622099   16687 out.go:298] Setting JSON to true
	I0703 22:46:32.622962   16687 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1738,"bootTime":1720045055,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:46:32.623009   16687 start.go:139] virtualization: kvm guest
	I0703 22:46:32.625311   16687 out.go:97] [download-only-485739] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:46:32.625411   16687 notify.go:220] Checking for updates...
	W0703 22:46:32.625452   16687 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball: no such file or directory
	I0703 22:46:32.626681   16687 out.go:169] MINIKUBE_LOCATION=18998
	I0703 22:46:32.627956   16687 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:46:32.629109   16687 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	I0703 22:46:32.630310   16687 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	I0703 22:46:32.631400   16687 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0703 22:46:32.633486   16687 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0703 22:46:32.633698   16687 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:46:32.730577   16687 out.go:97] Using the kvm2 driver based on user configuration
	I0703 22:46:32.730601   16687 start.go:297] selected driver: kvm2
	I0703 22:46:32.730606   16687 start.go:901] validating driver "kvm2" against <nil>
	I0703 22:46:32.730919   16687 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:46:32.731028   16687 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18998-9391/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0703 22:46:32.745206   16687 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0703 22:46:32.745249   16687 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0703 22:46:32.745760   16687 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0703 22:46:32.745931   16687 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0703 22:46:32.746001   16687 cni.go:84] Creating CNI manager for ""
	I0703 22:46:32.746020   16687 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0703 22:46:32.746077   16687 start.go:340] cluster config:
	{Name:download-only-485739 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-485739 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:46:32.746290   16687 iso.go:125] acquiring lock: {Name:mke39b31a4a84d7efedf67d51c801ff7cd79d25d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0703 22:46:32.747921   16687 out.go:97] Downloading VM boot image ...
	I0703 22:46:32.747945   16687 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18998-9391/.minikube/cache/iso/amd64/minikube-v1.33.1-1719929171-19175-amd64.iso
	I0703 22:46:35.167422   16687 out.go:97] Starting "download-only-485739" primary control-plane node in "download-only-485739" cluster
	I0703 22:46:35.167449   16687 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 22:46:35.189783   16687 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0703 22:46:35.189814   16687 cache.go:56] Caching tarball of preloaded images
	I0703 22:46:35.189959   16687 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 22:46:35.191492   16687 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0703 22:46:35.191511   16687 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0703 22:46:35.217323   16687 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0703 22:46:40.833149   16687 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0703 22:46:40.833255   16687 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0703 22:46:41.593326   16687 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0703 22:46:41.593686   16687 profile.go:143] Saving config to /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/download-only-485739/config.json ...
	I0703 22:46:41.593720   16687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/download-only-485739/config.json: {Name:mk60603a9e888f8d624fec6effb46f08c052a3fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0703 22:46:41.593904   16687 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0703 22:46:41.594102   16687 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18998-9391/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-485739 host does not exist
	  To start a cluster, run: "minikube start -p download-only-485739"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-485739
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.2/json-events (4.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-319183 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-319183 --force --alsologtostderr --kubernetes-version=v1.30.2 --container-runtime=docker --driver=kvm2 : (4.031793122s)
--- PASS: TestDownloadOnly/v1.30.2/json-events (4.03s)

                                                
                                    
TestDownloadOnly/v1.30.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/preload-exists
I0703 22:46:51.784466   16676 preload.go:132] Checking if preload exists for k8s version v1.30.2 and runtime docker
I0703 22:46:51.784503   16676 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18998-9391/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.30.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.2/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-319183
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-319183: exit status 85 (52.620597ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-485739 | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC |                     |
	|         | -p download-only-485739        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC | 03 Jul 24 22:46 UTC |
	| delete  | -p download-only-485739        | download-only-485739 | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC | 03 Jul 24 22:46 UTC |
	| start   | -o=json --download-only        | download-only-319183 | jenkins | v1.33.1 | 03 Jul 24 22:46 UTC |                     |
	|         | -p download-only-319183        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.2   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/03 22:46:47
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0703 22:46:47.788664   16915 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:46:47.788881   16915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:46:47.788888   16915 out.go:304] Setting ErrFile to fd 2...
	I0703 22:46:47.788893   16915 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:46:47.789041   16915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 22:46:47.789580   16915 out.go:298] Setting JSON to true
	I0703 22:46:47.790353   16915 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":1753,"bootTime":1720045055,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:46:47.790402   16915 start.go:139] virtualization: kvm guest
	I0703 22:46:47.792157   16915 out.go:97] [download-only-319183] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:46:47.792255   16915 notify.go:220] Checking for updates...
	I0703 22:46:47.793326   16915 out.go:169] MINIKUBE_LOCATION=18998
	I0703 22:46:47.794515   16915 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:46:47.795781   16915 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	I0703 22:46:47.796859   16915 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	I0703 22:46:47.797958   16915 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-319183 host does not exist
	  To start a cluster, run: "minikube start -p download-only-319183"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.2/LogsDuration (0.05s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAll (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.2/DeleteAll (0.12s)

                                                
                                    
TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-319183
--- PASS: TestDownloadOnly/v1.30.2/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestBinaryMirror (0.53s)

                                                
                                                
=== RUN   TestBinaryMirror
I0703 22:46:52.288661   16676 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-246843 --alsologtostderr --binary-mirror http://127.0.0.1:41783 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-246843" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-246843
--- PASS: TestBinaryMirror (0.53s)

                                                
                                    
TestOffline (108.37s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-921585 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-921585 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (1m47.578255713s)
helpers_test.go:175: Cleaning up "offline-docker-921585" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-921585
--- PASS: TestOffline (108.37s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-765846
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-765846: exit status 85 (43.435394ms)

                                                
                                                
-- stdout --
	* Profile "addons-765846" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-765846"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-765846
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-765846: exit status 85 (44.490994ms)

                                                
                                                
-- stdout --
	* Profile "addons-765846" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-765846"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

                                                
                                    
TestAddons/Setup (231.47s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-765846 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-765846 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m51.467069594s)
--- PASS: TestAddons/Setup (231.47s)

                                                
                                    
TestAddons/parallel/Registry (16.57s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 21.676507ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7w46d" [85b2a71d-c8a1-4539-8af5-86acd0521c31] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007583466s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nqhd5" [ca05a992-57a2-4aaf-82ad-5885860da716] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006336917s
addons_test.go:342: (dbg) Run:  kubectl --context addons-765846 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-765846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-765846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.697900584s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 ip
2024/07/03 22:51:00 [DEBUG] GET http://192.168.39.187:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.57s)

                                                
                                    
TestAddons/parallel/Ingress (22.88s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-765846 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-765846 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-765846 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b92784fe-df79-4459-9f0e-6f084f9ee792] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b92784fe-df79-4459-9f0e-6f084f9ee792] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.004387694s
I0703 22:51:08.957385   16676 kapi.go:184] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-765846 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.187
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-765846 addons disable ingress-dns --alsologtostderr -v=1: (1.748289338s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-765846 addons disable ingress --alsologtostderr -v=1: (7.759791244s)
--- PASS: TestAddons/parallel/Ingress (22.88s)

                                                
                                    
TestAddons/parallel/InspektorGadget (12.16s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-qjqdc" [9bb78d5e-7e1f-48d4-88d3-862b8feb411f] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004654179s
addons_test.go:843: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-765846
addons_test.go:843: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-765846: (6.150611296s)
--- PASS: TestAddons/parallel/InspektorGadget (12.16s)

TestAddons/parallel/MetricsServer (5.85s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.8743ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-z8qdn" [84429af2-458c-42f1-b052-607bbd9da0e5] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004873124s
addons_test.go:417: (dbg) Run:  kubectl --context addons-765846 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.85s)
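
The same health probe can be issued manually: kubectl top only succeeds once the metrics API is serving, which is effectively what this test asserts. A sketch of the sequence it automates (the enable step is shown for completeness; in this suite the addon is enabled during cluster setup):

    minikube -p addons-765846 addons enable metrics-server
    kubectl --context addons-765846 top pods -n kube-system
    minikube -p addons-765846 addons disable metrics-server --alsologtostderr -v=1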

TestAddons/parallel/HelmTiller (11.98s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 21.227418ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-nmvp5" [9f190b1e-9143-46c0-b581-005b99278984] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.010056726s
addons_test.go:475: (dbg) Run:  kubectl --context addons-765846 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-765846 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.261002271s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.98s)
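
The version probe here is an ordinary one-shot pod rather than a local helm binary; a sketch of the same check, with the image tag taken from the run above:

    kubectl --context addons-765846 run --rm helm-test --restart=Never \
      --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version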

TestAddons/parallel/CSI (42.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I0703 22:51:13.955183   16676 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0703 22:51:13.964041   16676 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0703 22:51:13.964062   16676 kapi.go:107] duration metric: took 8.88435ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:563: csi-hostpath-driver pods stabilized in 8.892332ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [197c7561-f27c-4472-857a-a15cd57d5a5a] Pending
helpers_test.go:344: "task-pv-pod" [197c7561-f27c-4472-857a-a15cd57d5a5a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [197c7561-f27c-4472-857a-a15cd57d5a5a] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.004725815s
addons_test.go:586: (dbg) Run:  kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-765846 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-765846 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-765846 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-765846 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [470bc017-59b5-40e5-9d3a-1e16561dfa46] Pending
helpers_test.go:344: "task-pv-pod-restore" [470bc017-59b5-40e5-9d3a-1e16561dfa46] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [470bc017-59b5-40e5-9d3a-1e16561dfa46] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.006723202s
addons_test.go:628: (dbg) Run:  kubectl --context addons-765846 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-765846 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-765846 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-linux-amd64 -p addons-765846 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.698736173s)
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (42.70s)
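
Condensed, the round trip above is: claim, consume, snapshot, delete the original, then restore from the snapshot. A sketch using the same testdata manifests (paths are relative to the test binary's working directory):

    kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-765846 delete pod task-pv-pod
    kubectl --context addons-765846 delete pvc hpvc
    kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-765846 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml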

TestAddons/parallel/Headlamp (13.61s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-765846 --alsologtostderr -v=1
addons_test.go:826: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-765846 --alsologtostderr -v=1: (1.602654016s)
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-n6jph" [f8cb4a75-c3e5-4aa2-976c-12309def3282] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-n6jph" [f8cb4a75-c3e5-4aa2-976c-12309def3282] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004188944s
--- PASS: TestAddons/parallel/Headlamp (13.61s)

TestAddons/parallel/CloudSpanner (6.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-jz8cv" [2e265349-1f5a-4382-8939-f4d75b7aa477] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00333771s
addons_test.go:862: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-765846
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

TestAddons/parallel/LocalPath (15.08s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-765846 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-765846 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [888c6379-3f8a-47dd-a864-147b5ca2c3a6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [888c6379-3f8a-47dd-a864-147b5ca2c3a6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [888c6379-3f8a-47dd-a864-147b5ca2c3a6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004127635s
addons_test.go:992: (dbg) Run:  kubectl --context addons-765846 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 ssh "cat /opt/local-path-provisioner/pvc-ef44cd56-d34a-4fdd-b33b-fb80152d54ed_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-765846 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-765846 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.08s)
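
The readback step works because the local-path provisioner backs each claim with a directory under /opt/local-path-provisioner on the node; the directory name embeds the generated PV id, so the exact path differs per run. A sketch of the manual equivalent (<pvc-id> is a placeholder for the identifier visible in the ls output):

    minikube -p addons-765846 ssh "ls /opt/local-path-provisioner/"
    minikube -p addons-765846 ssh "cat /opt/local-path-provisioner/<pvc-id>_default_test-pvc/file1"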

TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-vqftd" [66c31480-af89-4c6c-904e-4f222c65a742] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005114596s
addons_test.go:1056: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-765846
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-7j86c" [f059b936-5668-41ab-93c3-842a4aa84d92] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004695431s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/parallel/Volcano (44.91s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 20.656486ms
addons_test.go:897: volcano-admission stabilized in 20.723158ms
addons_test.go:905: volcano-controller stabilized in 22.91524ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-jvv72" [7c7b2351-1f30-4f38-8311-43b6d45c4b7b] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.00692083s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-w67b4" [ddf623a6-3aa5-41e0-ad4f-5e6cdfd2650a] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.007206692s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-w25t5" [4285369e-35c4-4f2a-a0cb-8822d8fb199d] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003621592s
addons_test.go:924: (dbg) Run:  kubectl --context addons-765846 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-765846 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-765846 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [128239d3-1b0c-40a9-a6cf-848352c358d4] Pending
helpers_test.go:344: "test-job-nginx-0" [128239d3-1b0c-40a9-a6cf-848352c358d4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [128239d3-1b0c-40a9-a6cf-848352c358d4] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 17.003196116s
addons_test.go:960: (dbg) Run:  out/minikube-linux-amd64 -p addons-765846 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-linux-amd64 -p addons-765846 addons disable volcano --alsologtostderr -v=1: (11.23184254s)
--- PASS: TestAddons/parallel/Volcano (44.91s)
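
Volcano jobs are CRDs, so plain kubectl can inspect both the job object and the pods it schedules; a sketch using this run's namespace and labels:

    kubectl --context addons-765846 get vcjob -n my-volcano
    kubectl --context addons-765846 get pods -n my-volcano -l volcano.sh/job-name=test-job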

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-765846 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-765846 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/StoppedEnableDisable (13.53s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-765846
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-765846: (13.286686994s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-765846
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-765846
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-765846
--- PASS: TestAddons/StoppedEnableDisable (13.53s)
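
The property pinned down here is that addon enable/disable operations work against a stopped profile, not just a running one. A sketch of the sequence:

    minikube stop -p addons-765846
    minikube addons enable dashboard -p addons-765846
    minikube addons disable dashboard -p addons-765846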

TestCertOptions (114.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-192591 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-192591 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m53.015198029s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-192591 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-192591 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-192591 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-192591" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-192591
--- PASS: TestCertOptions (114.40s)
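
The assertions boil down to reading the SANs and port out of the generated apiserver certificate; a sketch of the same inspection (the grep filter is added here for readability and is not part of the test):

    minikube -p cert-options-192591 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'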

TestCertExpiration (341.64s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-511634 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0703 23:43:21.593274   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:21.598568   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:21.608815   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:21.629064   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:21.669335   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:21.749642   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:21.910026   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:22.230343   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:22.871353   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:24.152439   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:26.712763   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:31.833397   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:43:42.074267   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-511634 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m42.05575264s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-511634 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0703 23:48:03.966019   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:48:21.593279   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:48:49.277257   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-511634 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (58.749199086s)
helpers_test.go:175: Cleaning up "cert-expiration-511634" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-511634
--- PASS: TestCertExpiration (341.64s)
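
The two starts exercise certificate rotation: the first issues cluster certificates that expire after three minutes, and the second start, run once they have lapsed, re-issues them with a one-year (8760h) lifetime. A sketch:

    minikube start -p cert-expiration-511634 --memory=2048 --cert-expiration=3m --driver=kvm2
    # wait past the three-minute expiry, then:
    minikube start -p cert-expiration-511634 --memory=2048 --cert-expiration=8760h --driver=kvm2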

TestDockerFlags (103.09s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-898992 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-898992 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m41.306142916s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-898992 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-898992 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-898992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-898992
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-898992: (1.338695356s)
--- PASS: TestDockerFlags (103.09s)
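
Values passed via --docker-env and --docker-opt land in the docker systemd unit inside the node, which is what the two ssh probes read back; a sketch:

    minikube start -p docker-flags-898992 --docker-env=FOO=BAR --docker-opt=debug --driver=kvm2
    minikube -p docker-flags-898992 ssh "sudo systemctl show docker --property=Environment --no-pager"
    minikube -p docker-flags-898992 ssh "sudo systemctl show docker --property=ExecStart --no-pager"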

TestForceSystemdFlag (98.38s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-268748 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-268748 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (1m36.866497591s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-268748 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-268748" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-268748
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-268748: (1.273503129s)
--- PASS: TestForceSystemdFlag (98.38s)
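
--force-systemd switches the container runtime to the systemd cgroup driver, and the assertion is a single docker info query; a sketch (the query should print systemd when the flag took effect):

    minikube start -p force-systemd-flag-268748 --memory=2048 --force-systemd --driver=kvm2
    minikube -p force-systemd-flag-268748 ssh "docker info --format {{.CgroupDriver}}"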

TestForceSystemdEnv (71.54s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-771014 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-771014 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (1m10.543744677s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-771014 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-771014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-771014
--- PASS: TestForceSystemdEnv (71.54s)

TestKVMDriverInstallOrUpdate (5.52s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0703 23:38:33.560368   16676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0703 23:38:33.560472   16676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0703 23:38:33.595645   16676 install.go:62] docker-machine-driver-kvm2: exit status 1
W0703 23:38:33.596018   16676 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0703 23:38:33.596081   16676 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3464162070/001/docker-machine-driver-kvm2
I0703 23:38:33.815402   16676 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3464162070/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0] Decompressors:map[bz2:0xc0004ccd10 gz:0xc0004ccd18 tar:0xc0004cc8b0 tar.bz2:0xc0004cc8c0 tar.gz:0xc0004cc8d0 tar.xz:0xc0004cc8e0 tar.zst:0xc0004cc8f0 tbz2:0xc0004cc8c0 tgz:0xc0004cc8d0 txz:0xc0004cc8e0 tzst:0xc0004cc8f0 xz:0xc0004ccd20 zip:0xc0004ccd30 zst:0xc0004ccd28] Getters:map[file:0xc000069690 http:0xc0015565f0 https:0xc001556640] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0703 23:38:33.815475   16676 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3464162070/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.52s)
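
As the log shows, the installer first tries the architecture-suffixed binary and falls back to the unsuffixed name when the checksum file 404s. The two release URLs it probes can be fetched directly; a sketch:

    curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64
    curl -fLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2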

TestErrorSpam/setup (51.43s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-147129 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-147129 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-147129 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-147129 --driver=kvm2 : (51.425979711s)
--- PASS: TestErrorSpam/setup (51.43s)

TestErrorSpam/start (0.32s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 start --dry-run
--- PASS: TestErrorSpam/start (0.32s)

TestErrorSpam/status (0.72s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 status
--- PASS: TestErrorSpam/status (0.72s)

TestErrorSpam/pause (1.22s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 pause
--- PASS: TestErrorSpam/pause (1.22s)

TestErrorSpam/unpause (1.23s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 unpause
--- PASS: TestErrorSpam/unpause (1.23s)

TestErrorSpam/stop (6.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 stop: (3.501450188s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 stop: (1.157320359s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-147129 --log_dir /tmp/nospam-147129 stop: (1.772635219s)
--- PASS: TestErrorSpam/stop (6.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18998-9391/.minikube/files/etc/test/nested/copy/16676/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (104.66s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377836 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-377836 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m44.655841378s)
--- PASS: TestFunctional/serial/StartWithProxy (104.66s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (39.53s)

=== RUN   TestFunctional/serial/SoftStart
I0703 22:55:15.795823   16676 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377836 --alsologtostderr -v=8
E0703 22:55:44.338662   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.343914   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.354165   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.374387   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.414643   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.494965   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.655368   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:44.975939   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:45.616822   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:46.897782   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:49.457969   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:55:54.578911   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-377836 --alsologtostderr -v=8: (39.529987495s)
functional_test.go:659: soft start took 39.530585308s for "functional-377836" cluster.
I0703 22:55:55.326113   16676 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestFunctional/serial/SoftStart (39.53s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-377836 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.14s)

TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-377836 /tmp/TestFunctionalserialCacheCmdcacheadd_local2413679218/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cache add minikube-local-cache-test:functional-377836
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cache delete minikube-local-cache-test:functional-377836
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-377836
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)
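
cache add also accepts images that exist only in the local docker daemon; the test builds a throwaway tag, pushes it into the node's cache, then deletes it. A sketch (the build context directory is a placeholder for the temp dir the test generates):

    docker build -t minikube-local-cache-test:functional-377836 <build-context-dir>
    minikube -p functional-377836 cache add minikube-local-cache-test:functional-377836
    minikube -p functional-377836 cache delete minikube-local-cache-test:functional-377836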

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.21s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (219.92201ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
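
cache reload re-pushes every cached image into the node, which is how the deleted pause image reappears above; a sketch of the delete/verify/reload/verify cycle:

    minikube -p functional-377836 ssh sudo docker rmi registry.k8s.io/pause:latest
    minikube -p functional-377836 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
    minikube -p functional-377836 cache reload
    minikube -p functional-377836 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again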

TestFunctional/serial/CacheCmd/cache/delete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.08s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 kubectl -- --context functional-377836 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-377836 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (115.46s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377836 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0703 22:56:04.819983   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:56:25.300345   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 22:57:06.260893   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-377836 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m55.460588213s)
functional_test.go:757: restart took 1m55.460698042s for "functional-377836" cluster.
I0703 22:57:55.935061   16676 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestFunctional/serial/ExtraConfig (115.46s)
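
--extra-config threads component flags through to kubeadm, here enabling an extra apiserver admission plugin across a restart. A sketch, plus one way to confirm the flag reached the apiserver command line (the label selector is the standard one for static control-plane pods, not something this test runs):

    minikube start -p functional-377836 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-377836 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins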

TestFunctional/serial/LogsCmd (0.96s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 logs
--- PASS: TestFunctional/serial/LogsCmd (0.96s)

TestFunctional/serial/LogsFileCmd (0.93s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 logs --file /tmp/TestFunctionalserialLogsFileCmd3257055913/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.93s)

TestFunctional/serial/InvalidService (4.32s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-377836 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-377836
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-377836: exit status 115 (261.656288ms)
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.219:30266 |
	|-----------|-------------|-------------|-----------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-377836 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.32s)

TestFunctional/parallel/ConfigCmd (0.33s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 config get cpus: exit status 14 (51.670703ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 config get cpus: exit status 14 (40.743623ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.33s)
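
Note: the two non-zero exits above are the assertion, not a flake — `minikube config get` returns exit status 14 whenever the key is absent from the config. A sketch of the same set → get → unset → get round trip in Go (binary path and profile name as in this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// The final get should fail with exit status 14 and
    	// "Error: specified key could not be found in config".
    	for _, args := range [][]string{
    		{"config", "set", "cpus", "2"},
    		{"config", "get", "cpus"},
    		{"config", "unset", "cpus"},
    		{"config", "get", "cpus"},
    	} {
    		cmd := exec.Command("out/minikube-linux-amd64", append([]string{"-p", "functional-377836"}, args...)...)
    		out, err := cmd.CombinedOutput()
    		fmt.Printf("%v -> %s(err: %v)\n", args, out, err)
    	}
    }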

TestFunctional/parallel/DashboardCmd (13.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377836 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-377836 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 25025: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.65s)
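
Note: the `unable to kill pid 25025: os: process already finished` line is a benign teardown race — the dashboard child exited before the test's cleanup signalled it. In Go this surfaces as os.ErrProcessDone and is safe to ignore; a minimal sketch:

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("true")
    	_ = cmd.Start()
    	_ = cmd.Wait() // the process has already exited here
    	// Killing after Wait returns os.ErrProcessDone
    	// ("os: process already finished"), exactly the message logged above.
    	if err := cmd.Process.Kill(); errors.Is(err, os.ErrProcessDone) {
    		fmt.Println("process already finished; nothing to kill")
    	}
    }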

TestFunctional/parallel/DryRun (0.24s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377836 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-377836 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (122.025302ms)

-- stdout --
	* [functional-377836] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0703 22:58:37.319361   24878 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:58:37.319739   24878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:58:37.319752   24878 out.go:304] Setting ErrFile to fd 2...
	I0703 22:58:37.319759   24878 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:58:37.320161   24878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 22:58:37.320997   24878 out.go:298] Setting JSON to false
	I0703 22:58:37.322139   24878 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2462,"bootTime":1720045055,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:58:37.322196   24878 start.go:139] virtualization: kvm guest
	I0703 22:58:37.323957   24878 out.go:177] * [functional-377836] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0703 22:58:37.325423   24878 notify.go:220] Checking for updates...
	I0703 22:58:37.325432   24878 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 22:58:37.326509   24878 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:58:37.327844   24878 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	I0703 22:58:37.329012   24878 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	I0703 22:58:37.330364   24878 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 22:58:37.331611   24878 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 22:58:37.333077   24878 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 22:58:37.333460   24878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:58:37.333524   24878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:58:37.348723   24878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45561
	I0703 22:58:37.349071   24878 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:58:37.349556   24878 main.go:141] libmachine: Using API Version  1
	I0703 22:58:37.349578   24878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:58:37.349937   24878 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:58:37.350153   24878 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:58:37.350402   24878 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:58:37.350823   24878 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:58:37.350869   24878 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:58:37.364840   24878 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0703 22:58:37.365127   24878 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:58:37.365537   24878 main.go:141] libmachine: Using API Version  1
	I0703 22:58:37.365557   24878 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:58:37.365835   24878 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:58:37.366004   24878 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:58:37.394719   24878 out.go:177] * Using the kvm2 driver based on existing profile
	I0703 22:58:37.395823   24878 start.go:297] selected driver: kvm2
	I0703 22:58:37.395839   24878 start.go:901] validating driver "kvm2" against &{Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:58:37.395933   24878 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 22:58:37.397855   24878 out.go:177] 
	W0703 22:58:37.398927   24878 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0703 22:58:37.399952   24878 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377836 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.24s)
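
Note: exit status 23 here is the dry-run validation refusing the request, since the 250MiB asked for is below minikube's 1800MB usable minimum. A schematic of the check — not minikube's actual code; the constants come from the message above:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// Requested memory below the usable minimum is rejected with
    	// RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23.
    	const minUsableMB = 1800
    	requested := 250
    	if requested < minUsableMB {
    		fmt.Printf("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB\n", requested, minUsableMB)
    		os.Exit(23)
    	}
    }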

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-377836 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-377836 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (128.540601ms)

-- stdout --
	* [functional-377836] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0703 22:58:36.400808   24656 out.go:291] Setting OutFile to fd 1 ...
	I0703 22:58:36.400902   24656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:58:36.400911   24656 out.go:304] Setting ErrFile to fd 2...
	I0703 22:58:36.400915   24656 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 22:58:36.401121   24656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 22:58:36.401614   24656 out.go:298] Setting JSON to false
	I0703 22:58:36.402541   24656 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":2461,"bootTime":1720045055,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1062-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0703 22:58:36.402589   24656 start.go:139] virtualization: kvm guest
	I0703 22:58:36.404412   24656 out.go:177] * [functional-377836] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0703 22:58:36.405617   24656 notify.go:220] Checking for updates...
	I0703 22:58:36.405630   24656 out.go:177]   - MINIKUBE_LOCATION=18998
	I0703 22:58:36.406748   24656 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0703 22:58:36.407801   24656 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	I0703 22:58:36.408922   24656 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	I0703 22:58:36.410077   24656 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0703 22:58:36.411249   24656 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0703 22:58:36.412679   24656 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 22:58:36.413270   24656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:58:36.413336   24656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:58:36.429440   24656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44171
	I0703 22:58:36.430140   24656 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:58:36.430810   24656 main.go:141] libmachine: Using API Version  1
	I0703 22:58:36.430839   24656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:58:36.431223   24656 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:58:36.431443   24656 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:58:36.431709   24656 driver.go:392] Setting default libvirt URI to qemu:///system
	I0703 22:58:36.432029   24656 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 22:58:36.432074   24656 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 22:58:36.446375   24656 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41719
	I0703 22:58:36.446760   24656 main.go:141] libmachine: () Calling .GetVersion
	I0703 22:58:36.447229   24656 main.go:141] libmachine: Using API Version  1
	I0703 22:58:36.447254   24656 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 22:58:36.447535   24656 main.go:141] libmachine: () Calling .GetMachineName
	I0703 22:58:36.447697   24656 main.go:141] libmachine: (functional-377836) Calling .DriverName
	I0703 22:58:36.479293   24656 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0703 22:58:36.480462   24656 start.go:297] selected driver: kvm2
	I0703 22:58:36.480481   24656 start.go:901] validating driver "kvm2" against &{Name:functional-377836 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19175/minikube-v1.33.1-1719929171-19175-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1719972989-19184@sha256:86cb76941aa00fc70e665895234bda20991d5563e39b8ff07212e31a82ce7fb1 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.2 ClusterName:functional-377836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.219 Port:8441 KubernetesVersion:v1.30.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0703 22:58:36.480635   24656 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0703 22:58:36.483153   24656 out.go:177] 
	W0703 22:58:36.484499   24656 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0703 22:58:36.485690   24656 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
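
Note: the French stdout/stderr above carries the same meaning as the DryRun output — "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested 250 MiB allocation is below the usable minimum of 1800 MB". A sketch of driving the localized path, assuming the locale environment selects minikube's French catalog (how the test itself forces French is not shown in this log):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-377836",
    		"--dry-run", "--memory", "250MB", "--driver=kvm2")
    	// Assumption: LC_ALL makes minikube pick the French translations.
    	cmd.Env = append(os.Environ(), "LC_ALL=fr")
    	out, _ := cmd.CombinedOutput()
    	fmt.Printf("%s", out) // expect "Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."
    }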

TestFunctional/parallel/StatusCmd (0.72s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.72s)
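
Note: the -f argument above is a Go text/template; only the {{...}} actions are evaluated, so the literal label "kublet" in the format string is echoed verbatim. A self-contained sketch of how such a format renders — the Status struct below mirrors only the referenced fields and is an assumption, not minikube's actual type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Status stands in for the object minikube feeds the template.
    type Status struct {
    	Host, Kubelet, APIServer, Kubeconfig string
    }

    func main() {
    	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
    	t := template.Must(template.New("status").Parse(format))
    	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
    	_ = t.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
    }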

TestFunctional/parallel/ServiceCmdConnect (28.43s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-377836 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-377836 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-7s6ql" [35ffa42c-a8fc-474f-ae70-7344a0a476fe] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-7s6ql" [35ffa42c-a8fc-474f-ae70-7344a0a476fe] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 28.003695018s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.219:32616
functional_test.go:1671: http://192.168.39.219:32616: success! body:

Hostname: hello-node-connect-57b4589c47-7s6ql

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.219:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.219:32616
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.43s)
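
Note: the body above is the echoserver response fetched from the NodePort URL that `service hello-node-connect --url` printed. A minimal sketch of the same probe — the URL is taken from this run and is only reachable from the test host:

    package main

    import (
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	// GET the NodePort endpoint; echoserver reflects the request back,
    	// which is the "success! body:" content shown above.
    	resp, err := http.Get("http://192.168.39.219:32616/")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("%s", body)
    }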

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (54.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [041fa0c0-0c71-426a-bffc-b59b57c3b224] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.238693808s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-377836 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-377836 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-377836 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-377836 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7e521514-c146-479a-9cdb-fb035d88e2b6] Pending
helpers_test.go:344: "sp-pod" [7e521514-c146-479a-9cdb-fb035d88e2b6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7e521514-c146-479a-9cdb-fb035d88e2b6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 30.004008058s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-377836 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-377836 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-377836 delete -f testdata/storage-provisioner/pod.yaml: (1.04244128s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-377836 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb2b10ab-1a12-41a1-93f0-5424936586c3] Pending
helpers_test.go:344: "sp-pod" [bb2b10ab-1a12-41a1-93f0-5424936586c3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb2b10ab-1a12-41a1-93f0-5424936586c3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.003510512s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-377836 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (54.07s)
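
Note: the second sp-pod is the persistence check — /tmp/mount is backed by the PVC, so the file written before the pod was deleted must still be listed afterwards. A sketch of that sequence (pod readiness waits between steps are elided here; the real test polls for Running):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func run(args ...string) {
    	full := append([]string{"--context", "functional-377836"}, args...)
    	out, err := exec.Command("kubectl", full...).CombinedOutput()
    	fmt.Printf("kubectl %v\n%s(err: %v)\n", args, out, err)
    }

    func main() {
    	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
    	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
    	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
    	run("exec", "sp-pod", "--", "ls", "/tmp/mount") // expect "foo" to survive
    }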

TestFunctional/parallel/SSHCmd (0.34s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.34s)

TestFunctional/parallel/CpCmd (1.14s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh -n functional-377836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cp functional-377836:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3053905494/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh -n functional-377836 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh -n functional-377836 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.14s)

TestFunctional/parallel/MySQL (31.81s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-377836 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-rjm2c" [8f4182bf-a92b-406e-b959-2c66a35ebfa8] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-rjm2c" [8f4182bf-a92b-406e-b959-2c66a35ebfa8] Running
E0703 22:58:28.181130   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 25.009172798s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;": exit status 1 (321.422764ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0703 22:58:30.296450   16676 retry.go:31] will retry after 1.180614195s: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;": exit status 1 (208.631794ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I0703 22:58:31.686806   16676 retry.go:31] will retry after 1.530877817s: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;": exit status 1 (210.692823ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0703 22:58:33.429470   16676 retry.go:31] will retry after 2.983231016s: exit status 1
functional_test.go:1803: (dbg) Run:  kubectl --context functional-377836 exec mysql-64454c8b5c-rjm2c -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (31.81s)
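
Note: the three non-zero exits above are the normal mysqld startup sequence — auth is rejected while it initializes (ERROR 1045), then the socket is briefly absent (ERROR 2002) — which is why the harness retries with growing delays. A sketch of the same backoff probe (pod name taken from this run):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Probe until mysqld answers, doubling the delay between attempts.
    	delay := time.Second
    	for attempt := 0; attempt < 6; attempt++ {
    		out, err := exec.Command("kubectl", "--context", "functional-377836",
    			"exec", "mysql-64454c8b5c-rjm2c", "--",
    			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
    		if err == nil {
    			fmt.Printf("%s", out)
    			return
    		}
    		time.Sleep(delay)
    		delay *= 2
    	}
    	fmt.Println("mysql never became ready")
    }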

TestFunctional/parallel/FileSync (0.19s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16676/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /etc/test/nested/copy/16676/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.19s)

TestFunctional/parallel/CertSync (1.38s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16676.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /etc/ssl/certs/16676.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16676.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /usr/share/ca-certificates/16676.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/166762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /etc/ssl/certs/166762.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/166762.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /usr/share/ca-certificates/166762.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.38s)
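
Note: the ".0" filenames checked above (51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash link convention: the hash of the certificate's subject plus a ".0" suffix. A sketch that computes the expected name by shelling out to openssl, assuming test.pem is a local copy of the synced certificate (hypothetical path):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// `openssl x509 -subject_hash` prints the hash OpenSSL uses when
    	// naming certs under /etc/ssl/certs.
    	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", "test.pem").Output()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s.0\n", strings.TrimSpace(string(out))) // e.g. "51391683.0"
    }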

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-377836 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
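
Note: the --template above is a Go text/template that reaches (index .items 0).metadata.labels and prints each key. A self-contained sketch of the same range action, feeding a label map directly (the sample labels are illustrative):

    package main

    import (
    	"os"
    	"text/template"
    )

    func main() {
    	// range over a map in text/template visits keys in sorted order;
    	// only the keys are printed, each followed by a space.
    	t := template.Must(template.New("labels").Parse("'{{range $k, $v := .}}{{$k}} {{end}}'\n"))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"kubernetes.io/arch":     "amd64",
    		"kubernetes.io/hostname": "functional-377836",
    		"kubernetes.io/os":       "linux",
    	})
    }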

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh "sudo systemctl is-active crio": exit status 1 (238.084674ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)
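
Note: `systemctl is-active` exits 0 only for an active unit; "inactive" comes with a non-zero status (3 in this run), which the ssh layer reports as "Process exited with status 3" and minikube ssh in turn maps to exit 1. A sketch of reading that code directly, assuming a systemd host:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// On the node above this printed "inactive" with exit status 3.
    	out, err := exec.Command("systemctl", "is-active", "crio").CombinedOutput()
    	var ee *exec.ExitError
    	if errors.As(err, &ee) {
    		fmt.Printf("crio: %s(exit %d)\n", out, ee.ExitCode())
    	}
    }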

TestFunctional/parallel/License (0.16s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.72s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.72s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.09s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377836 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.2
registry.k8s.io/kube-proxy:v1.30.2
registry.k8s.io/kube-controller-manager:v1.30.2
registry.k8s.io/kube-apiserver:v1.30.2
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-377836
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-377836
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377836 image ls --format short --alsologtostderr:
I0703 22:58:45.390724   25599 out.go:291] Setting OutFile to fd 1 ...
I0703 22:58:45.390855   25599 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:45.390866   25599 out.go:304] Setting ErrFile to fd 2...
I0703 22:58:45.390873   25599 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:45.391410   25599 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
I0703 22:58:45.392627   25599 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:45.392763   25599 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:45.393334   25599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:45.393393   25599 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:45.407918   25599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45179
I0703 22:58:45.408331   25599 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:45.408802   25599 main.go:141] libmachine: Using API Version  1
I0703 22:58:45.408823   25599 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:45.409135   25599 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:45.409392   25599 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:58:45.410985   25599 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:45.411019   25599 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:45.425009   25599 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43725
I0703 22:58:45.425357   25599 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:45.425838   25599 main.go:141] libmachine: Using API Version  1
I0703 22:58:45.425858   25599 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:45.426165   25599 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:45.426347   25599 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:58:45.426510   25599 ssh_runner.go:195] Run: systemctl --version
I0703 22:58:45.426530   25599 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:58:45.429454   25599 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:45.429858   25599 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:58:45.429890   25599 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:45.430100   25599 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:58:45.430266   25599 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:58:45.430405   25599 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:58:45.430538   25599 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:58:45.534148   25599 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0703 22:58:45.582050   25599 main.go:141] libmachine: Making call to close driver server
I0703 22:58:45.582063   25599 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:45.582326   25599 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:45.582352   25599 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:58:45.582353   25599 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:45.582366   25599 main.go:141] libmachine: Making call to close driver server
I0703 22:58:45.582374   25599 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:45.582553   25599 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:45.582567   25599 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377836 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-proxy                  | v1.30.2           | 53c535741fb44 | 84.7MB |
| gcr.io/google-containers/addon-resizer      | functional-377836 | ffd4cfbbe753e | 32.9MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.2           | e874818b3caac | 111MB  |
| registry.k8s.io/kube-scheduler              | v1.30.2           | 7820c83aa1394 | 62MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-377836 | 0881a585c8c96 | 30B    |
| registry.k8s.io/kube-apiserver              | v1.30.2           | 56ce0fd9fb532 | 117MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377836 image ls --format table --alsologtostderr:
I0703 22:58:47.185197   25922 out.go:291] Setting OutFile to fd 1 ...
I0703 22:58:47.185304   25922 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:47.185312   25922 out.go:304] Setting ErrFile to fd 2...
I0703 22:58:47.185317   25922 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:47.185524   25922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
I0703 22:58:47.186806   25922 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:47.187007   25922 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:47.187825   25922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:47.187879   25922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:47.202648   25922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35673
I0703 22:58:47.203144   25922 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:47.203748   25922 main.go:141] libmachine: Using API Version  1
I0703 22:58:47.203766   25922 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:47.204120   25922 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:47.204301   25922 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:58:47.206084   25922 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:47.206134   25922 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:47.220293   25922 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33381
I0703 22:58:47.220622   25922 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:47.221021   25922 main.go:141] libmachine: Using API Version  1
I0703 22:58:47.221043   25922 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:47.221383   25922 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:47.221553   25922 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:58:47.221714   25922 ssh_runner.go:195] Run: systemctl --version
I0703 22:58:47.221738   25922 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:58:47.224195   25922 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:47.224570   25922 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:58:47.224594   25922 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:47.224755   25922 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:58:47.224890   25922 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:58:47.225016   25922 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:58:47.225174   25922 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:58:47.310274   25922 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0703 22:58:47.364231   25922 main.go:141] libmachine: Making call to close driver server
I0703 22:58:47.364252   25922 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:47.364501   25922 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:47.364523   25922 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:47.364531   25922 main.go:141] libmachine: Making call to close driver server
I0703 22:58:47.364538   25922 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:47.364739   25922 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:47.364753   25922 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377836 image ls --format json --alsologtostderr:
[{"id":"0881a585c8c9672e5baf78525dc9223713b0a68bc98b895f93f0052ae1da3dcc","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-377836"],"size":"30"},{"id":"e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.2"],"size":"111000000"},{"id":"7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.2"],"size":"62000000"},{"id":"53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.2"],"size":"84700000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba3820
18ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-377836"],"size":"32900000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[
],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.2"],"size":"117000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377836 image ls --format json --alsologtostderr:
I0703 22:58:46.986373   25899 out.go:291] Setting OutFile to fd 1 ...
I0703 22:58:46.986608   25899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:46.986616   25899 out.go:304] Setting ErrFile to fd 2...
I0703 22:58:46.986620   25899 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:46.986768   25899 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
I0703 22:58:46.987288   25899 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:46.987379   25899 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:46.987750   25899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:46.987798   25899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:47.002661   25899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43311
I0703 22:58:47.003096   25899 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:47.003619   25899 main.go:141] libmachine: Using API Version  1
I0703 22:58:47.003644   25899 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:47.003940   25899 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:47.004115   25899 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:58:47.006037   25899 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:47.006086   25899 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:47.019964   25899 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43385
I0703 22:58:47.020358   25899 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:47.020840   25899 main.go:141] libmachine: Using API Version  1
I0703 22:58:47.020857   25899 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:47.021210   25899 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:47.021388   25899 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:58:47.021589   25899 ssh_runner.go:195] Run: systemctl --version
I0703 22:58:47.021609   25899 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:58:47.023933   25899 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:47.024301   25899 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:58:47.024332   25899 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:47.024413   25899 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:58:47.024585   25899 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:58:47.024731   25899 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:58:47.024864   25899 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:58:47.104088   25899 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0703 22:58:47.136513   25899 main.go:141] libmachine: Making call to close driver server
I0703 22:58:47.136524   25899 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:47.136856   25899 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:58:47.136862   25899 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:47.136893   25899 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:47.136909   25899 main.go:141] libmachine: Making call to close driver server
I0703 22:58:47.136922   25899 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:47.137167   25899 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:47.137182   25899 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:47.137206   25899 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)
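Note: the JSON listing above is the machine-readable counterpart of the table and YAML formats. Purely as an illustration (not part of the test run, and assuming the jq tool is available on the host), the same output can be filtered like so:

	$ out/minikube-linux-amd64 -p functional-377836 image ls --format json | jq -r '.[].repoTags[]'
	# prints one repo:tag line per image known to the runtime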

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377836 image ls --format yaml --alsologtostderr:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.2
size: "111000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.2
size: "84700000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-377836
size: "32900000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.2
size: "62000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0881a585c8c9672e5baf78525dc9223713b0a68bc98b895f93f0052ae1da3dcc
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-377836
size: "30"
- id: 56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.2
size: "117000000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377836 image ls --format yaml --alsologtostderr:
I0703 22:58:45.635488   25720 out.go:291] Setting OutFile to fd 1 ...
I0703 22:58:45.635780   25720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:45.635792   25720 out.go:304] Setting ErrFile to fd 2...
I0703 22:58:45.635799   25720 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:45.636051   25720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
I0703 22:58:45.636799   25720 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:45.636958   25720 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:45.637545   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:45.637601   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:45.652866   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40753
I0703 22:58:45.653280   25720 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:45.654097   25720 main.go:141] libmachine: Using API Version  1
I0703 22:58:45.654134   25720 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:45.654606   25720 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:45.654842   25720 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:58:45.656905   25720 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:45.656947   25720 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:45.671475   25720 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41849
I0703 22:58:45.671905   25720 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:45.672348   25720 main.go:141] libmachine: Using API Version  1
I0703 22:58:45.672371   25720 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:45.672729   25720 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:45.672913   25720 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:58:45.673127   25720 ssh_runner.go:195] Run: systemctl --version
I0703 22:58:45.673147   25720 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:58:45.675888   25720 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:45.676344   25720 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:58:45.676375   25720 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:45.676479   25720 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:58:45.676629   25720 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:58:45.676787   25720 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:58:45.676891   25720 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:58:45.784291   25720 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0703 22:58:45.839234   25720 main.go:141] libmachine: Making call to close driver server
I0703 22:58:45.839250   25720 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:45.839552   25720 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:45.839565   25720 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:45.839569   25720 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:58:45.839573   25720 main.go:141] libmachine: Making call to close driver server
I0703 22:58:45.839583   25720 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:45.839820   25720 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:45.839887   25720 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:58:45.839919   25720 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh pgrep buildkitd: exit status 1 (182.096535ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image build -t localhost/my-image:functional-377836 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image build -t localhost/my-image:functional-377836 testdata/build --alsologtostderr: (2.310823012s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-377836 image build -t localhost/my-image:functional-377836 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in fd3645087be4
---> Removed intermediate container fd3645087be4
---> 0760e8758206
Step 3/3 : ADD content.txt /
---> 3658756a1d6a
Successfully built 3658756a1d6a
Successfully tagged localhost/my-image:functional-377836
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-377836 image build -t localhost/my-image:functional-377836 testdata/build --alsologtostderr:
I0703 22:58:46.066317   25774 out.go:291] Setting OutFile to fd 1 ...
I0703 22:58:46.066475   25774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:46.066484   25774 out.go:304] Setting ErrFile to fd 2...
I0703 22:58:46.066488   25774 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0703 22:58:46.066671   25774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
I0703 22:58:46.067162   25774 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:46.067611   25774 config.go:182] Loaded profile config "functional-377836": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
I0703 22:58:46.067947   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:46.068002   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:46.082815   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42683
I0703 22:58:46.083285   25774 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:46.083819   25774 main.go:141] libmachine: Using API Version  1
I0703 22:58:46.083839   25774 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:46.084208   25774 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:46.084372   25774 main.go:141] libmachine: (functional-377836) Calling .GetState
I0703 22:58:46.086102   25774 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0703 22:58:46.086135   25774 main.go:141] libmachine: Launching plugin server for driver kvm2
I0703 22:58:46.100924   25774 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42719
I0703 22:58:46.101320   25774 main.go:141] libmachine: () Calling .GetVersion
I0703 22:58:46.101851   25774 main.go:141] libmachine: Using API Version  1
I0703 22:58:46.101881   25774 main.go:141] libmachine: () Calling .SetConfigRaw
I0703 22:58:46.102312   25774 main.go:141] libmachine: () Calling .GetMachineName
I0703 22:58:46.102526   25774 main.go:141] libmachine: (functional-377836) Calling .DriverName
I0703 22:58:46.102710   25774 ssh_runner.go:195] Run: systemctl --version
I0703 22:58:46.102736   25774 main.go:141] libmachine: (functional-377836) Calling .GetSSHHostname
I0703 22:58:46.105480   25774 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:46.105877   25774 main.go:141] libmachine: (functional-377836) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:06:52:1f", ip: ""} in network mk-functional-377836: {Iface:virbr1 ExpiryTime:2024-07-03 23:53:45 +0000 UTC Type:0 Mac:52:54:00:06:52:1f Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:functional-377836 Clientid:01:52:54:00:06:52:1f}
I0703 22:58:46.105910   25774 main.go:141] libmachine: (functional-377836) DBG | domain functional-377836 has defined IP address 192.168.39.219 and MAC address 52:54:00:06:52:1f in network mk-functional-377836
I0703 22:58:46.106070   25774 main.go:141] libmachine: (functional-377836) Calling .GetSSHPort
I0703 22:58:46.106231   25774 main.go:141] libmachine: (functional-377836) Calling .GetSSHKeyPath
I0703 22:58:46.106421   25774 main.go:141] libmachine: (functional-377836) Calling .GetSSHUsername
I0703 22:58:46.106567   25774 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/functional-377836/id_rsa Username:docker}
I0703 22:58:46.188430   25774 build_images.go:161] Building image from path: /tmp/build.3562188776.tar
I0703 22:58:46.188507   25774 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0703 22:58:46.207766   25774 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3562188776.tar
I0703 22:58:46.212355   25774 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3562188776.tar: stat -c "%s %y" /var/lib/minikube/build/build.3562188776.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3562188776.tar': No such file or directory
I0703 22:58:46.212391   25774 ssh_runner.go:362] scp /tmp/build.3562188776.tar --> /var/lib/minikube/build/build.3562188776.tar (3072 bytes)
I0703 22:58:46.252261   25774 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3562188776
I0703 22:58:46.270776   25774 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3562188776 -xf /var/lib/minikube/build/build.3562188776.tar
I0703 22:58:46.291797   25774 docker.go:360] Building image: /var/lib/minikube/build/build.3562188776
I0703 22:58:46.291912   25774 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-377836 /var/lib/minikube/build/build.3562188776
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0703 22:58:48.310932   25774 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-377836 /var/lib/minikube/build/build.3562188776: (2.018989007s)
I0703 22:58:48.311022   25774 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3562188776
I0703 22:58:48.322876   25774 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3562188776.tar
I0703 22:58:48.334122   25774 build_images.go:217] Built localhost/my-image:functional-377836 from /tmp/build.3562188776.tar
I0703 22:58:48.334147   25774 build_images.go:133] succeeded building to: functional-377836
I0703 22:58:48.334151   25774 build_images.go:134] failed building to: 
I0703 22:58:48.334175   25774 main.go:141] libmachine: Making call to close driver server
I0703 22:58:48.334192   25774 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:48.334454   25774 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:48.334473   25774 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:48.334495   25774 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
I0703 22:58:48.334499   25774 main.go:141] libmachine: Making call to close driver server
I0703 22:58:48.334511   25774 main.go:141] libmachine: (functional-377836) Calling .Close
I0703 22:58:48.334722   25774 main.go:141] libmachine: Successfully made call to close driver server
I0703 22:58:48.334739   25774 main.go:141] libmachine: Making call to close connection to plugin binary
I0703 22:58:48.334757   25774 main.go:141] libmachine: (functional-377836) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls
2024/07/03 22:58:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.68s)
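For reference, the Step 1/3 .. 3/3 lines above imply that testdata/build contains a Dockerfile along these lines (a reconstruction from the log, not the verbatim file):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

Placed next to a content.txt file, it builds inside the cluster exactly as the test drives it:

	$ out/minikube-linux-amd64 -p functional-377836 image build -t localhost/my-image:functional-377836 testdata/build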

TestFunctional/parallel/ImageCommands/Setup (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.331104953s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-377836
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.35s)

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "261.594359ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "49.661817ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/DockerEnv/bash (0.73s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-377836 docker-env) && out/minikube-linux-amd64 status -p functional-377836"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-377836 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.73s)
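The bash test above exercises the documented docker-env pattern: evaluating the command's output points the host docker client at the daemon inside the VM. A minimal interactive equivalent of what the test runs:

	$ eval "$(out/minikube-linux-amd64 -p functional-377836 docker-env)"
	$ docker images    # now lists the VM's images, not the host's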

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "256.887715ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "42.71319ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image load --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image load --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr: (4.843876288s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.10s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image load --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image load --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr: (2.800327284s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.00s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.264385324s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-377836
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image load --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image load --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr: (4.277645819s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.74s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image save gcr.io/google-containers/addon-resizer:functional-377836 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image save gcr.io/google-containers/addon-resizer:functional-377836 /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.494436034s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.49s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image rm gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.47s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image load /home/jenkins/workspace/KVM_Linux_integration/addon-resizer-save.tar --alsologtostderr: (1.475304642s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.69s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-377836
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 image save --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 image save --daemon gcr.io/google-containers/addon-resizer:functional-377836 --alsologtostderr: (1.597927962s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-377836
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.63s)
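Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon cover a full tarball round trip. A condensed sketch of the sequence they verify (the relative tar path here is illustrative; the tests use an absolute workspace path):

	$ out/minikube-linux-amd64 -p functional-377836 image save gcr.io/google-containers/addon-resizer:functional-377836 ./addon-resizer-save.tar
	$ out/minikube-linux-amd64 -p functional-377836 image rm gcr.io/google-containers/addon-resizer:functional-377836
	$ out/minikube-linux-amd64 -p functional-377836 image load ./addon-resizer-save.tar
	$ out/minikube-linux-amd64 -p functional-377836 image ls    # the tag should be listed again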

TestFunctional/parallel/ServiceCmd/DeployApp (16.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-377836 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-377836 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-tjj96" [eb38e782-1f09-4b13-af02-acf97e818712] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-tjj96" [eb38e782-1f09-4b13-af02-acf97e818712] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.006490694s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.29s)

TestFunctional/parallel/MountCmd/any-port (7.25s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdany-port3697182671/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1720047516491542679" to /tmp/TestFunctionalparallelMountCmdany-port3697182671/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1720047516491542679" to /tmp/TestFunctionalparallelMountCmdany-port3697182671/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1720047516491542679" to /tmp/TestFunctionalparallelMountCmdany-port3697182671/001/test-1720047516491542679
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (204.872043ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0703 22:58:36.696756   16676 retry.go:31] will retry after 360.445642ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul  3 22:58 created-by-test
-rw-r--r-- 1 docker docker 24 Jul  3 22:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul  3 22:58 test-1720047516491542679
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh cat /mount-9p/test-1720047516491542679
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-377836 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9e0c7802-d0d9-4b06-a537-993e8f74c79b] Pending
helpers_test.go:344: "busybox-mount" [9e0c7802-d0d9-4b06-a537-993e8f74c79b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9e0c7802-d0d9-4b06-a537-993e8f74c79b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9e0c7802-d0d9-4b06-a537-993e8f74c79b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004903438s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-377836 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdany-port3697182671/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.25s)
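The 9p mount verified above can be reproduced by hand; the commands below mirror what the test runs (the host directory is arbitrary, and the mount command is backgrounded the way the test daemonizes it):

	$ out/minikube-linux-amd64 mount -p functional-377836 /tmp/somedir:/mount-9p &
	$ out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p"
	$ out/minikube-linux-amd64 -p functional-377836 ssh -- ls -la /mount-9p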

TestFunctional/parallel/ServiceCmd/List (1.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 service list: (1.305550921s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.31s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-377836 service list -o json: (1.251731797s)
functional_test.go:1490: Took "1.251832048s" to run "out/minikube-linux-amd64 -p functional-377836 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.25s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.219:31214
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.28s)

TestFunctional/parallel/ServiceCmd/Format (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.35s)

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdspecific-port659908898/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.879706ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0703 22:58:44.006525   16676 retry.go:31] will retry after 271.872213ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdspecific-port659908898/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh "sudo umount -f /mount-9p": exit status 1 (210.466135ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-377836 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdspecific-port659908898/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

TestFunctional/parallel/ServiceCmd/URL (0.3s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.219:31214
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.30s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdVerifyCleanup501993937/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdVerifyCleanup501993937/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdVerifyCleanup501993937/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T" /mount1: exit status 1 (302.135156ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0703 22:58:45.737391   16676 retry.go:31] will retry after 602.485321ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-377836 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-377836 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdVerifyCleanup501993937/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdVerifyCleanup501993937/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-377836 /tmp/TestFunctionalparallelMountCmdVerifyCleanup501993937/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-377836
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-377836
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-377836
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestGvisorAddon (222.63s)

=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-957530 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0703 23:44:02.555090   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
gvisor_addon_test.go:52: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-957530 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (1m35.398923939s)
gvisor_addon_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-957530 cache add gcr.io/k8s-minikube/gvisor-addon:2
E0703 23:45:27.385299   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
gvisor_addon_test.go:58: (dbg) Done: out/minikube-linux-amd64 -p gvisor-957530 cache add gcr.io/k8s-minikube/gvisor-addon:2: (22.766678705s)
gvisor_addon_test.go:63: (dbg) Run:  out/minikube-linux-amd64 -p gvisor-957530 addons enable gvisor
E0703 23:45:44.338574   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
gvisor_addon_test.go:63: (dbg) Done: out/minikube-linux-amd64 -p gvisor-957530 addons enable gvisor: (3.631332768s)
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [31b285be-87cb-4c6e-8b9d-f9769882e17e] Running
gvisor_addon_test.go:68: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.035761918s
gvisor_addon_test.go:73: (dbg) Run:  kubectl --context gvisor-957530 replace --force -f testdata/nginx-gvisor.yaml
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [eb9980c7-74a5-4508-8953-aa8a75ef06cd] Pending
helpers_test.go:344: "nginx-gvisor" [eb9980c7-74a5-4508-8953-aa8a75ef06cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-gvisor" [eb9980c7-74a5-4508-8953-aa8a75ef06cd] Running
gvisor_addon_test.go:78: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 16.003546774s
gvisor_addon_test.go:83: (dbg) Run:  out/minikube-linux-amd64 stop -p gvisor-957530
gvisor_addon_test.go:83: (dbg) Done: out/minikube-linux-amd64 stop -p gvisor-957530: (2.30768411s)
gvisor_addon_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-957530 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
gvisor_addon_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p gvisor-957530 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : (53.064380804s)
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "kubernetes.io/minikube-addons=gvisor" in namespace "kube-system" ...
helpers_test.go:344: "gvisor" [31b285be-87cb-4c6e-8b9d-f9769882e17e] Running / Ready:ContainersNotReady (containers with unready status: [gvisor]) / ContainersReady:ContainersNotReady (containers with unready status: [gvisor])
helpers_test.go:344: "gvisor" [31b285be-87cb-4c6e-8b9d-f9769882e17e] Running
gvisor_addon_test.go:92: (dbg) TestGvisorAddon: kubernetes.io/minikube-addons=gvisor healthy within 6.006565116s
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: waiting 4m0s for pods matching "run=nginx,runtime=gvisor" in namespace "default" ...
helpers_test.go:344: "nginx-gvisor" [eb9980c7-74a5-4508-8953-aa8a75ef06cd] Running / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
gvisor_addon_test.go:95: (dbg) TestGvisorAddon: run=nginx,runtime=gvisor healthy within 6.004794085s
helpers_test.go:175: Cleaning up "gvisor-957530" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-957530
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p gvisor-957530: (11.200564709s)
--- PASS: TestGvisorAddon (222.63s)
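For reference, the gVisor flow exercised above can be replayed by hand with the same commands the test drives (the profile name is arbitrary; the addon assumes the containerd runtime, which is why the test passes --container-runtime=containerd):

	$ out/minikube-linux-amd64 start -p gvisor-957530 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2
	$ out/minikube-linux-amd64 -p gvisor-957530 cache add gcr.io/k8s-minikube/gvisor-addon:2
	$ out/minikube-linux-amd64 -p gvisor-957530 addons enable gvisor
	$ kubectl --context gvisor-957530 replace --force -f testdata/nginx-gvisor.yaml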

                                                
                                    
TestMultiControlPlane/serial/StartCluster (211.98s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-639879 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0703 23:00:44.338390   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:01:12.021400   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-639879 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m31.326605416s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (211.98s)
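The --ha flag is what requests the multi-control-plane topology; a minimal sketch of the bring-up, taken directly from the commands logged above:

	$ out/minikube-linux-amd64 start -p ha-639879 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2
	$ out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr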

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.07s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-639879 -- rollout status deployment/busybox: (2.917524767s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-9xjdm -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-kbjws -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-n8hps -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-9xjdm -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-kbjws -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-n8hps -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-9xjdm -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-kbjws -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-n8hps -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.07s)
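The DNS check above deploys a busybox Deployment and resolves external and in-cluster names from every replica. A sketch of one iteration, with <pod> standing in for a pod name returned by the jsonpath query (the hashed names in the log are run-specific):

	$ out/minikube-linux-amd64 kubectl -p ha-639879 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
	$ out/minikube-linux-amd64 kubectl -p ha-639879 -- rollout status deployment/busybox
	$ out/minikube-linux-amd64 kubectl -p ha-639879 -- get pods -o jsonpath='{.items[*].metadata.name}'
	$ out/minikube-linux-amd64 kubectl -p ha-639879 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local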

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.23s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-9xjdm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-9xjdm -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-kbjws -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-kbjws -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-n8hps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-639879 -- exec busybox-fc5497c4f-n8hps -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.23s)
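Each pod resolves host.minikube.internal and then pings the address it gets back; awk 'NR==5' keeps the fifth line of the nslookup output and cut takes its third space-separated field, i.e. the resolved IP. With <pod> again standing in for a real pod name:

	$ out/minikube-linux-amd64 kubectl -p ha-639879 -- exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	$ out/minikube-linux-amd64 kubectl -p ha-639879 -- exec <pod> -- sh -c "ping -c 1 192.168.39.1"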

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (52.95s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-639879 -v=7 --alsologtostderr
E0703 23:03:03.965502   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:03.970795   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:03.981025   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:04.001232   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:04.041481   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:04.121792   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:04.282204   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:04.602867   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:05.243840   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:06.524495   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:09.085607   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:14.206385   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:03:24.447579   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-639879 -v=7 --alsologtostderr: (52.144292583s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.95s)
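Without --control-plane, node add joins the new machine as a worker (ha-639879-m04 in the status output below); the invocation is the same as the test's:

	$ out/minikube-linux-amd64 node add -p ha-639879 -v=7 --alsologtostderr
	$ out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr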

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-639879 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.83s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (12.34s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp testdata/cp-test.txt ha-639879:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile714402818/001/cp-test_ha-639879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879:/home/docker/cp-test.txt ha-639879-m02:/home/docker/cp-test_ha-639879_ha-639879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test_ha-639879_ha-639879-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879:/home/docker/cp-test.txt ha-639879-m03:/home/docker/cp-test_ha-639879_ha-639879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test_ha-639879_ha-639879-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879:/home/docker/cp-test.txt ha-639879-m04:/home/docker/cp-test_ha-639879_ha-639879-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test_ha-639879_ha-639879-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp testdata/cp-test.txt ha-639879-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile714402818/001/cp-test_ha-639879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m02:/home/docker/cp-test.txt ha-639879:/home/docker/cp-test_ha-639879-m02_ha-639879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test_ha-639879-m02_ha-639879.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m02:/home/docker/cp-test.txt ha-639879-m03:/home/docker/cp-test_ha-639879-m02_ha-639879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test_ha-639879-m02_ha-639879-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m02:/home/docker/cp-test.txt ha-639879-m04:/home/docker/cp-test_ha-639879-m02_ha-639879-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test_ha-639879-m02_ha-639879-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp testdata/cp-test.txt ha-639879-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile714402818/001/cp-test_ha-639879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m03:/home/docker/cp-test.txt ha-639879:/home/docker/cp-test_ha-639879-m03_ha-639879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test_ha-639879-m03_ha-639879.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m03:/home/docker/cp-test.txt ha-639879-m02:/home/docker/cp-test_ha-639879-m03_ha-639879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test_ha-639879-m03_ha-639879-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m03:/home/docker/cp-test.txt ha-639879-m04:/home/docker/cp-test_ha-639879-m03_ha-639879-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test_ha-639879-m03_ha-639879-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp testdata/cp-test.txt ha-639879-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile714402818/001/cp-test_ha-639879-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m04:/home/docker/cp-test.txt ha-639879:/home/docker/cp-test_ha-639879-m04_ha-639879.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879 "sudo cat /home/docker/cp-test_ha-639879-m04_ha-639879.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m04:/home/docker/cp-test.txt ha-639879-m02:/home/docker/cp-test_ha-639879-m04_ha-639879-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test_ha-639879-m04_ha-639879-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 cp ha-639879-m04:/home/docker/cp-test.txt ha-639879-m03:/home/docker/cp-test_ha-639879-m04_ha-639879-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m03 "sudo cat /home/docker/cp-test_ha-639879-m04_ha-639879-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.34s)
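The copy matrix above boils down to two commands per node pair: minikube cp to place a file (host path to <node>:<path>, or node-to-node) and ssh -n to verify the contents. One representative pair:

	$ out/minikube-linux-amd64 -p ha-639879 cp testdata/cp-test.txt ha-639879-m02:/home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p ha-639879 ssh -n ha-639879-m02 "sudo cat /home/docker/cp-test.txt"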

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.93s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 node stop m02 -v=7 --alsologtostderr
E0703 23:03:44.928098   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-639879 node stop m02 -v=7 --alsologtostderr: (13.314216527s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr: exit status 7 (617.653805ms)
-- stdout --
	ha-639879
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-639879-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-639879-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-639879-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:03:57.097769   30325 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:03:57.097981   30325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:03:57.097989   30325 out.go:304] Setting ErrFile to fd 2...
	I0703 23:03:57.097993   30325 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:03:57.098179   30325 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 23:03:57.098323   30325 out.go:298] Setting JSON to false
	I0703 23:03:57.098346   30325 mustload.go:65] Loading cluster: ha-639879
	I0703 23:03:57.098446   30325 notify.go:220] Checking for updates...
	I0703 23:03:57.098667   30325 config.go:182] Loaded profile config "ha-639879": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 23:03:57.098684   30325 status.go:174] checking status of ha-639879 ...
	I0703 23:03:57.099010   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.099066   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.118115   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34465
	I0703 23:03:57.118474   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.119164   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.119206   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.119622   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.120031   30325 main.go:141] libmachine: (ha-639879) Calling .GetState
	I0703 23:03:57.121849   30325 status.go:364] ha-639879 host status = "Running" (err=<nil>)
	I0703 23:03:57.121865   30325 host.go:66] Checking if "ha-639879" exists ...
	I0703 23:03:57.122279   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.122322   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.137203   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42489
	I0703 23:03:57.137607   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.138064   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.138084   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.138435   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.138618   30325 main.go:141] libmachine: (ha-639879) Calling .GetIP
	I0703 23:03:57.141474   30325 main.go:141] libmachine: (ha-639879) DBG | domain ha-639879 has defined MAC address 52:54:00:6c:e3:2a in network mk-ha-639879
	I0703 23:03:57.141923   30325 main.go:141] libmachine: (ha-639879) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:e3:2a", ip: ""} in network mk-ha-639879: {Iface:virbr1 ExpiryTime:2024-07-03 23:59:13 +0000 UTC Type:0 Mac:52:54:00:6c:e3:2a Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-639879 Clientid:01:52:54:00:6c:e3:2a}
	I0703 23:03:57.141954   30325 main.go:141] libmachine: (ha-639879) DBG | domain ha-639879 has defined IP address 192.168.39.219 and MAC address 52:54:00:6c:e3:2a in network mk-ha-639879
	I0703 23:03:57.142050   30325 host.go:66] Checking if "ha-639879" exists ...
	I0703 23:03:57.142368   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.142403   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.157040   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0703 23:03:57.157380   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.157801   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.157819   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.158152   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.158327   30325 main.go:141] libmachine: (ha-639879) Calling .DriverName
	I0703 23:03:57.158513   30325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:03:57.158545   30325 main.go:141] libmachine: (ha-639879) Calling .GetSSHHostname
	I0703 23:03:57.161028   30325 main.go:141] libmachine: (ha-639879) DBG | domain ha-639879 has defined MAC address 52:54:00:6c:e3:2a in network mk-ha-639879
	I0703 23:03:57.161522   30325 main.go:141] libmachine: (ha-639879) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6c:e3:2a", ip: ""} in network mk-ha-639879: {Iface:virbr1 ExpiryTime:2024-07-03 23:59:13 +0000 UTC Type:0 Mac:52:54:00:6c:e3:2a Iaid: IPaddr:192.168.39.219 Prefix:24 Hostname:ha-639879 Clientid:01:52:54:00:6c:e3:2a}
	I0703 23:03:57.161549   30325 main.go:141] libmachine: (ha-639879) DBG | domain ha-639879 has defined IP address 192.168.39.219 and MAC address 52:54:00:6c:e3:2a in network mk-ha-639879
	I0703 23:03:57.161674   30325 main.go:141] libmachine: (ha-639879) Calling .GetSSHPort
	I0703 23:03:57.161846   30325 main.go:141] libmachine: (ha-639879) Calling .GetSSHKeyPath
	I0703 23:03:57.162006   30325 main.go:141] libmachine: (ha-639879) Calling .GetSSHUsername
	I0703 23:03:57.162149   30325 sshutil.go:53] new ssh client: &{IP:192.168.39.219 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/ha-639879/id_rsa Username:docker}
	I0703 23:03:57.242954   30325 ssh_runner.go:195] Run: systemctl --version
	I0703 23:03:57.250833   30325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:03:57.271955   30325 kubeconfig.go:125] found "ha-639879" server: "https://192.168.39.254:8443"
	I0703 23:03:57.271994   30325 api_server.go:166] Checking apiserver status ...
	I0703 23:03:57.272048   30325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:03:57.289758   30325 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1929/cgroup
	W0703 23:03:57.305846   30325 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1929/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:03:57.305898   30325 ssh_runner.go:195] Run: ls
	I0703 23:03:57.312137   30325 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0703 23:03:57.317462   30325 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0703 23:03:57.317495   30325 status.go:456] ha-639879 apiserver status = Running (err=<nil>)
	I0703 23:03:57.317508   30325 status.go:176] ha-639879 status: &{Name:ha-639879 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:03:57.317531   30325 status.go:174] checking status of ha-639879-m02 ...
	I0703 23:03:57.317881   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.317913   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.332700   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I0703 23:03:57.333052   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.333540   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.333566   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.333864   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.334051   30325 main.go:141] libmachine: (ha-639879-m02) Calling .GetState
	I0703 23:03:57.335685   30325 status.go:364] ha-639879-m02 host status = "Stopped" (err=<nil>)
	I0703 23:03:57.335702   30325 status.go:377] host is not running, skipping remaining checks
	I0703 23:03:57.335709   30325 status.go:176] ha-639879-m02 status: &{Name:ha-639879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:03:57.335727   30325 status.go:174] checking status of ha-639879-m03 ...
	I0703 23:03:57.336013   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.336054   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.350845   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46133
	I0703 23:03:57.351158   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.351549   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.351568   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.351841   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.352020   30325 main.go:141] libmachine: (ha-639879-m03) Calling .GetState
	I0703 23:03:57.353648   30325 status.go:364] ha-639879-m03 host status = "Running" (err=<nil>)
	I0703 23:03:57.353662   30325 host.go:66] Checking if "ha-639879-m03" exists ...
	I0703 23:03:57.353945   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.353973   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.367537   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38377
	I0703 23:03:57.367863   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.368291   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.368308   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.368601   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.368858   30325 main.go:141] libmachine: (ha-639879-m03) Calling .GetIP
	I0703 23:03:57.371402   30325 main.go:141] libmachine: (ha-639879-m03) DBG | domain ha-639879-m03 has defined MAC address 52:54:00:38:1a:85 in network mk-ha-639879
	I0703 23:03:57.371824   30325 main.go:141] libmachine: (ha-639879-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:1a:85", ip: ""} in network mk-ha-639879: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:27 +0000 UTC Type:0 Mac:52:54:00:38:1a:85 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-639879-m03 Clientid:01:52:54:00:38:1a:85}
	I0703 23:03:57.371848   30325 main.go:141] libmachine: (ha-639879-m03) DBG | domain ha-639879-m03 has defined IP address 192.168.39.56 and MAC address 52:54:00:38:1a:85 in network mk-ha-639879
	I0703 23:03:57.371987   30325 host.go:66] Checking if "ha-639879-m03" exists ...
	I0703 23:03:57.372282   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.372311   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.386609   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36835
	I0703 23:03:57.386914   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.387286   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.387302   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.387614   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.387757   30325 main.go:141] libmachine: (ha-639879-m03) Calling .DriverName
	I0703 23:03:57.387928   30325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:03:57.387953   30325 main.go:141] libmachine: (ha-639879-m03) Calling .GetSSHHostname
	I0703 23:03:57.390498   30325 main.go:141] libmachine: (ha-639879-m03) DBG | domain ha-639879-m03 has defined MAC address 52:54:00:38:1a:85 in network mk-ha-639879
	I0703 23:03:57.390914   30325 main.go:141] libmachine: (ha-639879-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:38:1a:85", ip: ""} in network mk-ha-639879: {Iface:virbr1 ExpiryTime:2024-07-04 00:01:27 +0000 UTC Type:0 Mac:52:54:00:38:1a:85 Iaid: IPaddr:192.168.39.56 Prefix:24 Hostname:ha-639879-m03 Clientid:01:52:54:00:38:1a:85}
	I0703 23:03:57.390961   30325 main.go:141] libmachine: (ha-639879-m03) DBG | domain ha-639879-m03 has defined IP address 192.168.39.56 and MAC address 52:54:00:38:1a:85 in network mk-ha-639879
	I0703 23:03:57.391076   30325 main.go:141] libmachine: (ha-639879-m03) Calling .GetSSHPort
	I0703 23:03:57.391215   30325 main.go:141] libmachine: (ha-639879-m03) Calling .GetSSHKeyPath
	I0703 23:03:57.391375   30325 main.go:141] libmachine: (ha-639879-m03) Calling .GetSSHUsername
	I0703 23:03:57.391523   30325 sshutil.go:53] new ssh client: &{IP:192.168.39.56 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/ha-639879-m03/id_rsa Username:docker}
	I0703 23:03:57.473696   30325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:03:57.489299   30325 kubeconfig.go:125] found "ha-639879" server: "https://192.168.39.254:8443"
	I0703 23:03:57.489320   30325 api_server.go:166] Checking apiserver status ...
	I0703 23:03:57.489345   30325 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:03:57.504150   30325 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1911/cgroup
	W0703 23:03:57.513864   30325 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1911/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:03:57.513896   30325 ssh_runner.go:195] Run: ls
	I0703 23:03:57.519053   30325 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0703 23:03:57.523468   30325 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0703 23:03:57.523483   30325 status.go:456] ha-639879-m03 apiserver status = Running (err=<nil>)
	I0703 23:03:57.523491   30325 status.go:176] ha-639879-m03 status: &{Name:ha-639879-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:03:57.523512   30325 status.go:174] checking status of ha-639879-m04 ...
	I0703 23:03:57.523806   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.523841   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.538234   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38183
	I0703 23:03:57.538637   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.539137   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.539167   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.539430   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.539611   30325 main.go:141] libmachine: (ha-639879-m04) Calling .GetState
	I0703 23:03:57.540847   30325 status.go:364] ha-639879-m04 host status = "Running" (err=<nil>)
	I0703 23:03:57.540860   30325 host.go:66] Checking if "ha-639879-m04" exists ...
	I0703 23:03:57.541177   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.541214   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.554827   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0703 23:03:57.555159   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.555569   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.555587   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.555905   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.556057   30325 main.go:141] libmachine: (ha-639879-m04) Calling .GetIP
	I0703 23:03:57.558508   30325 main.go:141] libmachine: (ha-639879-m04) DBG | domain ha-639879-m04 has defined MAC address 52:54:00:63:99:ac in network mk-ha-639879
	I0703 23:03:57.558878   30325 main.go:141] libmachine: (ha-639879-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:99:ac", ip: ""} in network mk-ha-639879: {Iface:virbr1 ExpiryTime:2024-07-04 00:02:52 +0000 UTC Type:0 Mac:52:54:00:63:99:ac Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-639879-m04 Clientid:01:52:54:00:63:99:ac}
	I0703 23:03:57.558905   30325 main.go:141] libmachine: (ha-639879-m04) DBG | domain ha-639879-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:63:99:ac in network mk-ha-639879
	I0703 23:03:57.559009   30325 host.go:66] Checking if "ha-639879-m04" exists ...
	I0703 23:03:57.559287   30325 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:03:57.559325   30325 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:03:57.573358   30325 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I0703 23:03:57.573671   30325 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:03:57.574107   30325 main.go:141] libmachine: Using API Version  1
	I0703 23:03:57.574127   30325 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:03:57.574417   30325 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:03:57.574598   30325 main.go:141] libmachine: (ha-639879-m04) Calling .DriverName
	I0703 23:03:57.574758   30325 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:03:57.574780   30325 main.go:141] libmachine: (ha-639879-m04) Calling .GetSSHHostname
	I0703 23:03:57.577271   30325 main.go:141] libmachine: (ha-639879-m04) DBG | domain ha-639879-m04 has defined MAC address 52:54:00:63:99:ac in network mk-ha-639879
	I0703 23:03:57.577694   30325 main.go:141] libmachine: (ha-639879-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:99:ac", ip: ""} in network mk-ha-639879: {Iface:virbr1 ExpiryTime:2024-07-04 00:02:52 +0000 UTC Type:0 Mac:52:54:00:63:99:ac Iaid: IPaddr:192.168.39.110 Prefix:24 Hostname:ha-639879-m04 Clientid:01:52:54:00:63:99:ac}
	I0703 23:03:57.577717   30325 main.go:141] libmachine: (ha-639879-m04) DBG | domain ha-639879-m04 has defined IP address 192.168.39.110 and MAC address 52:54:00:63:99:ac in network mk-ha-639879
	I0703 23:03:57.577869   30325 main.go:141] libmachine: (ha-639879-m04) Calling .GetSSHPort
	I0703 23:03:57.578024   30325 main.go:141] libmachine: (ha-639879-m04) Calling .GetSSHKeyPath
	I0703 23:03:57.578161   30325 main.go:141] libmachine: (ha-639879-m04) Calling .GetSSHUsername
	I0703 23:03:57.578289   30325 sshutil.go:53] new ssh client: &{IP:192.168.39.110 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/ha-639879-m04/id_rsa Username:docker}
	I0703 23:03:57.660115   30325 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:03:57.675488   30325 status.go:176] ha-639879-m04 status: &{Name:ha-639879-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.93s)
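Note that status deliberately exits non-zero (exit status 7 here) once any node is down, so scripts can branch on the exit code instead of parsing the table. A sketch:

	$ out/minikube-linux-amd64 -p ha-639879 node stop m02 -v=7 --alsologtostderr
	$ out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
	$ echo $?    # 7 while m02 is stopped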

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.61s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (41.45s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 node start m02 -v=7 --alsologtostderr
E0703 23:04:25.889008   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-639879 node start m02 -v=7 --alsologtostderr: (40.59674843s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.45s)
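The stopped control-plane node is brought back with node start, after which kubectl get nodes should show all four nodes again:

	$ out/minikube-linux-amd64 -p ha-639879 node start m02 -v=7 --alsologtostderr
	$ kubectl get nodes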

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.81s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (229.04s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-639879 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-639879 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-639879 -v=7 --alsologtostderr: (40.873993624s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-639879 --wait=true -v=7 --alsologtostderr
E0703 23:05:44.338912   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:05:47.809760   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:08:03.965736   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-639879 --wait=true -v=7 --alsologtostderr: (3m8.080545677s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-639879
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (229.04s)
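The restart round-trip is: record the node list, stop the whole profile, start it again with --wait=true, and confirm the list is unchanged:

	$ out/minikube-linux-amd64 node list -p ha-639879 -v=7 --alsologtostderr
	$ out/minikube-linux-amd64 stop -p ha-639879 -v=7 --alsologtostderr
	$ out/minikube-linux-amd64 start -p ha-639879 --wait=true -v=7 --alsologtostderr
	$ out/minikube-linux-amd64 node list -p ha-639879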

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.02s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 node delete m03 -v=7 --alsologtostderr
E0703 23:08:31.651196   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-639879 node delete m03 -v=7 --alsologtostderr: (7.289193698s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.02s)
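Removing a control-plane member is a single node delete; the go-template query above then inspects the Ready condition on each remaining node:

	$ out/minikube-linux-amd64 -p ha-639879 node delete m03 -v=7 --alsologtostderr
	$ kubectl get nodes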

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.6s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.60s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (39.05s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-639879 stop -v=7 --alsologtostderr: (38.953713424s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr: exit status 7 (97.561135ms)
-- stdout --
	ha-639879
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-639879-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-639879-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0703 23:09:17.203640   32772 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:09:17.203739   32772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:09:17.203747   32772 out.go:304] Setting ErrFile to fd 2...
	I0703 23:09:17.203753   32772 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:09:17.203943   32772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 23:09:17.204139   32772 out.go:298] Setting JSON to false
	I0703 23:09:17.204166   32772 mustload.go:65] Loading cluster: ha-639879
	I0703 23:09:17.204356   32772 notify.go:220] Checking for updates...
	I0703 23:09:17.204653   32772 config.go:182] Loaded profile config "ha-639879": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 23:09:17.204672   32772 status.go:174] checking status of ha-639879 ...
	I0703 23:09:17.205241   32772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:09:17.205274   32772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:09:17.223860   32772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41579
	I0703 23:09:17.224338   32772 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:09:17.224895   32772 main.go:141] libmachine: Using API Version  1
	I0703 23:09:17.224913   32772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:09:17.225264   32772 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:09:17.225472   32772 main.go:141] libmachine: (ha-639879) Calling .GetState
	I0703 23:09:17.226925   32772 status.go:364] ha-639879 host status = "Stopped" (err=<nil>)
	I0703 23:09:17.226938   32772 status.go:377] host is not running, skipping remaining checks
	I0703 23:09:17.226945   32772 status.go:176] ha-639879 status: &{Name:ha-639879 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:09:17.226987   32772 status.go:174] checking status of ha-639879-m02 ...
	I0703 23:09:17.227258   32772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:09:17.227308   32772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:09:17.241248   32772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42419
	I0703 23:09:17.241584   32772 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:09:17.242023   32772 main.go:141] libmachine: Using API Version  1
	I0703 23:09:17.242047   32772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:09:17.242319   32772 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:09:17.242483   32772 main.go:141] libmachine: (ha-639879-m02) Calling .GetState
	I0703 23:09:17.243813   32772 status.go:364] ha-639879-m02 host status = "Stopped" (err=<nil>)
	I0703 23:09:17.243826   32772 status.go:377] host is not running, skipping remaining checks
	I0703 23:09:17.243831   32772 status.go:176] ha-639879-m02 status: &{Name:ha-639879-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:09:17.243853   32772 status.go:174] checking status of ha-639879-m04 ...
	I0703 23:09:17.244101   32772 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:09:17.244147   32772 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:09:17.257615   32772 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44643
	I0703 23:09:17.257956   32772 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:09:17.258547   32772 main.go:141] libmachine: Using API Version  1
	I0703 23:09:17.258568   32772 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:09:17.258929   32772 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:09:17.259092   32772 main.go:141] libmachine: (ha-639879-m04) Calling .GetState
	I0703 23:09:17.260523   32772 status.go:364] ha-639879-m04 host status = "Stopped" (err=<nil>)
	I0703 23:09:17.260539   32772 status.go:377] host is not running, skipping remaining checks
	I0703 23:09:17.260544   32772 status.go:176] ha-639879-m04 status: &{Name:ha-639879-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.05s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (156.45s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-639879 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0703 23:10:44.338463   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-639879 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m35.728195139s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (156.45s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.6s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.60s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (76.07s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-639879 --control-plane -v=7 --alsologtostderr
E0703 23:12:07.382354   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:13:03.965769   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-639879 --control-plane -v=7 --alsologtostderr: (1m15.253890135s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (76.07s)
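With --control-plane, node add joins the machine as an additional control-plane member rather than a worker, restoring the three-member control plane reduced by the delete above:

	$ out/minikube-linux-amd64 node add -p ha-639879 --control-plane -v=7 --alsologtostderr
	$ out/minikube-linux-amd64 -p ha-639879 status -v=7 --alsologtostderr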

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

                                                
                                    
TestImageBuild/serial/Setup (51.84s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-808843 --driver=kvm2 
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-808843 --driver=kvm2 : (51.840418123s)
--- PASS: TestImageBuild/serial/Setup (51.84s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.65s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-808843
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-808843: (1.651116187s)
--- PASS: TestImageBuild/serial/NormalBuild (1.65s)
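minikube image build runs a Docker-style build against the cluster's own container runtime, so the resulting image is usable by pods in that cluster without pushing to a registry:

	$ out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-808843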

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.99s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-808843
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.99s)
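--build-opt forwards options to the underlying build, here a build-arg and no-cache:

	$ out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-808843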

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.37s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-808843
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.37s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.28s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-808843
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.28s)
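-f selects a Dockerfile by path relative to the build context, mirroring docker build -f:

	$ out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-808843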

                                                
                                    
TestJSONOutput/start/Command (60.57s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-123337 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-123337 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m0.56798687s)
--- PASS: TestJSONOutput/start/Command (60.57s)
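With --output=json, start emits one CloudEvents-style JSON object per line instead of human-readable output; the DistinctCurrentSteps and IncreasingCurrentSteps checks below validate the step fields in that stream. A sketch of inspecting it (the jq filter is illustrative only, not part of the test):

	$ out/minikube-linux-amd64 start -p json-output-123337 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 | jq .data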

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
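The two parallel subtests assert properties of the step events captured from the start run above: currentstep values must be distinct and must increase. Roughly the same check over a saved stream, assuming events.json holds one event per line:

    # sort -n -u -c fails on out-of-order or duplicate step numbers
    jq -r 'select(.data.currentstep != null) | .data.currentstep' events.json \
      | sort -n -u -c && echo 'currentstep values are distinct and increasing'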

TestJSONOutput/pause/Command (0.61s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-123337 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-123337 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.59s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-123337 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-123337 --output=json --user=testUser: (7.592264765s)
--- PASS: TestJSONOutput/stop/Command (7.59s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-950371 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-950371 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (56.917769ms)

-- stdout --
	{"specversion":"1.0","id":"80489625-b023-49ad-a823-d4a8704a746f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-950371] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9516de27-b833-4a10-9ba7-80e02cf892d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18998"}}
	{"specversion":"1.0","id":"feed39e9-4f6e-4208-bcb5-1d78a6436a11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c6e5741e-5143-4b1e-9f8a-78c83c99cf2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig"}}
	{"specversion":"1.0","id":"03653e76-c45b-483d-810e-049eb8d0308c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube"}}
	{"specversion":"1.0","id":"bc3ce0c2-f999-481a-9e95-b0f41f2bc888","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"2b1aa3ab-bca5-43fe-a11f-3466da96628f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2dd35025-0bd1-4d34-83b1-fbb3f9d23ae3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-950371" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-950371
--- PASS: TestErrorJSONOutput (0.18s)
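Failures arrive as structured events too: the stream above ends with an io.k8s.sigs.minikube.error event carrying a stable name (DRV_UNSUPPORTED_OS), a human-readable message, and the exit code. A sketch of pulling those fields out for scripting, assuming jq (the profile name is arbitrary):

    out/minikube-linux-amd64 start -p demo --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error")
               | "\(.data.name): \(.data.message) (exit \(.data.exitcode))"'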

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (216.82s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-574486 --driver=kvm2 
E0703 23:15:44.337920   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:18:03.966394   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-574486 --driver=kvm2 : (2m46.024926529s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-577846 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-577846 --driver=kvm2 : (47.77546498s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-574486
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-577846
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-577846" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-577846
helpers_test.go:175: Cleaning up "first-574486" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-574486
--- PASS: TestMinikubeProfile (216.82s)
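The profile test is a round-trip: start two clusters, select each with `minikube profile <name>`, and confirm the switch via `profile list -ojson`. Condensed, with jq as an assumed helper (the .valid[].Name path reflects the usual shape of that output, which groups profiles into valid and invalid arrays):

    out/minikube-linux-amd64 start -p first-574486 --driver=kvm2
    out/minikube-linux-amd64 start -p second-577846 --driver=kvm2
    out/minikube-linux-amd64 profile first-574486        # make it the active profile
    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'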

TestMountStart/serial/StartWithMountFirst (28.02s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-064116 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-064116 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (27.017572564s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.02s)

TestMountStart/serial/VerifyMountFirst (0.35s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-064116 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-064116 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.35s)
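Verification is two guest-side checks: the host directory is visible at /minikube-host, and a 9p filesystem (the transport behind --mount, tuned by the --mount-msize/--mount-port flags above) is mounted. A scripted form of the same check:

    # fail loudly if the 9p host mount is missing
    out/minikube-linux-amd64 -p mount-start-1-064116 ssh -- ls /minikube-host >/dev/null \
      && out/minikube-linux-amd64 -p mount-start-1-064116 ssh -- 'mount | grep -q 9p' \
      || echo '9p host mount not present' >&2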

TestMountStart/serial/StartWithMountSecond (31.98s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-077004 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
E0703 23:19:27.011919   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-077004 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.98196766s)
--- PASS: TestMountStart/serial/StartWithMountSecond (31.98s)

TestMountStart/serial/VerifyMountSecond (0.38s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-077004 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-077004 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.38s)

TestMountStart/serial/DeleteFirst (0.67s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-064116 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.67s)

TestMountStart/serial/VerifyMountPostDelete (0.35s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-077004 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-077004 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.35s)

TestMountStart/serial/Stop (2.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-077004
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-077004: (2.264478286s)
--- PASS: TestMountStart/serial/Stop (2.26s)

TestMountStart/serial/RestartStopped (26.21s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-077004
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-077004: (25.207314365s)
--- PASS: TestMountStart/serial/RestartStopped (26.21s)

TestMountStart/serial/VerifyMountPostStop (0.35s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-077004 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-077004 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.35s)

TestMultiNode/serial/FreshStart2Nodes (116.18s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378997 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0703 23:20:44.338650   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378997 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (1m55.781967087s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (116.18s)

TestMultiNode/serial/DeployApp2Nodes (4.21s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-378997 -- rollout status deployment/busybox: (2.696563068s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-b8lnh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-btpkq -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-b8lnh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-btpkq -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-b8lnh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-btpkq -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.21s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-b8lnh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-b8lnh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-btpkq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378997 -- exec busybox-fc5497c4f-btpkq -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
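The pipeline here relies on busybox's nslookup output format: the answer for host.minikube.internal lands on line 5, awk 'NR==5' grabs that line, and cut takes the third space-separated field, the host gateway IP (192.168.39.1 in this run), which each pod then pings. Done by hand against one pod of the busybox deployment:

    # resolve the host gateway inside a pod, then ping it once
    HOST_IP=$(kubectl --context multinode-378997 exec deploy/busybox -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context multinode-378997 exec deploy/busybox -- ping -c 1 "$HOST_IP"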

TestMultiNode/serial/AddNode (47.52s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-378997 -v 3 --alsologtostderr
E0703 23:23:03.966355   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-378997 -v 3 --alsologtostderr: (46.976590788s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.52s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-378997 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.55s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.55s)

TestMultiNode/serial/CopyFile (6.84s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp testdata/cp-test.txt multinode-378997:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile80824515/001/cp-test_multinode-378997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997:/home/docker/cp-test.txt multinode-378997-m02:/home/docker/cp-test_multinode-378997_multinode-378997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m02 "sudo cat /home/docker/cp-test_multinode-378997_multinode-378997-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997:/home/docker/cp-test.txt multinode-378997-m03:/home/docker/cp-test_multinode-378997_multinode-378997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m03 "sudo cat /home/docker/cp-test_multinode-378997_multinode-378997-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp testdata/cp-test.txt multinode-378997-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile80824515/001/cp-test_multinode-378997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997-m02:/home/docker/cp-test.txt multinode-378997:/home/docker/cp-test_multinode-378997-m02_multinode-378997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997 "sudo cat /home/docker/cp-test_multinode-378997-m02_multinode-378997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997-m02:/home/docker/cp-test.txt multinode-378997-m03:/home/docker/cp-test_multinode-378997-m02_multinode-378997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m03 "sudo cat /home/docker/cp-test_multinode-378997-m02_multinode-378997-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp testdata/cp-test.txt multinode-378997-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile80824515/001/cp-test_multinode-378997-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997-m03:/home/docker/cp-test.txt multinode-378997:/home/docker/cp-test_multinode-378997-m03_multinode-378997.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997 "sudo cat /home/docker/cp-test_multinode-378997-m03_multinode-378997.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997-m03:/home/docker/cp-test.txt multinode-378997-m02:/home/docker/cp-test_multinode-378997-m03_multinode-378997-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 ssh -n multinode-378997-m02 "sudo cat /home/docker/cp-test_multinode-378997-m03_multinode-378997-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (6.84s)
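`minikube cp` accepts a local path or <node>:<path> on either side, and the sequence above walks every direction between the control plane and the workers. The three shapes, condensed (destination paths are illustrative):

    out/minikube-linux-amd64 -p multinode-378997 cp testdata/cp-test.txt multinode-378997:/home/docker/cp-test.txt     # host -> node
    out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997:/home/docker/cp-test.txt /tmp/cp-test.txt         # node -> host
    out/minikube-linux-amd64 -p multinode-378997 cp multinode-378997:/home/docker/cp-test.txt multinode-378997-m02:/home/docker/cp-test.txt   # node -> node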

TestMultiNode/serial/StopNode (3.34s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-378997 node stop m03: (2.499440983s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378997 status: exit status 7 (427.462748ms)

-- stdout --
	multinode-378997
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-378997-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-378997-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr: exit status 7 (417.065486ms)

-- stdout --
	multinode-378997
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-378997-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-378997-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0703 23:23:27.748168   41947 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:23:27.748384   41947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:23:27.748392   41947 out.go:304] Setting ErrFile to fd 2...
	I0703 23:23:27.748396   41947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:23:27.748583   41947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 23:23:27.748736   41947 out.go:298] Setting JSON to false
	I0703 23:23:27.748758   41947 mustload.go:65] Loading cluster: multinode-378997
	I0703 23:23:27.748793   41947 notify.go:220] Checking for updates...
	I0703 23:23:27.749079   41947 config.go:182] Loaded profile config "multinode-378997": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 23:23:27.749094   41947 status.go:174] checking status of multinode-378997 ...
	I0703 23:23:27.749481   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:27.749551   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:27.770977   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0703 23:23:27.771344   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:27.771839   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:27.771878   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:27.772181   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:27.772344   41947 main.go:141] libmachine: (multinode-378997) Calling .GetState
	I0703 23:23:27.773883   41947 status.go:364] multinode-378997 host status = "Running" (err=<nil>)
	I0703 23:23:27.773908   41947 host.go:66] Checking if "multinode-378997" exists ...
	I0703 23:23:27.774302   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:27.774342   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:27.788577   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37559
	I0703 23:23:27.788907   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:27.789444   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:27.789473   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:27.789751   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:27.789943   41947 main.go:141] libmachine: (multinode-378997) Calling .GetIP
	I0703 23:23:27.792454   41947 main.go:141] libmachine: (multinode-378997) DBG | domain multinode-378997 has defined MAC address 52:54:00:b8:51:c5 in network mk-multinode-378997
	I0703 23:23:27.792840   41947 main.go:141] libmachine: (multinode-378997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:51:c5", ip: ""} in network mk-multinode-378997: {Iface:virbr1 ExpiryTime:2024-07-04 00:20:42 +0000 UTC Type:0 Mac:52:54:00:b8:51:c5 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-378997 Clientid:01:52:54:00:b8:51:c5}
	I0703 23:23:27.792867   41947 main.go:141] libmachine: (multinode-378997) DBG | domain multinode-378997 has defined IP address 192.168.39.180 and MAC address 52:54:00:b8:51:c5 in network mk-multinode-378997
	I0703 23:23:27.792988   41947 host.go:66] Checking if "multinode-378997" exists ...
	I0703 23:23:27.793362   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:27.793417   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:27.807263   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37713
	I0703 23:23:27.807614   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:27.808031   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:27.808051   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:27.808311   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:27.808478   41947 main.go:141] libmachine: (multinode-378997) Calling .DriverName
	I0703 23:23:27.808645   41947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:23:27.808684   41947 main.go:141] libmachine: (multinode-378997) Calling .GetSSHHostname
	I0703 23:23:27.811161   41947 main.go:141] libmachine: (multinode-378997) DBG | domain multinode-378997 has defined MAC address 52:54:00:b8:51:c5 in network mk-multinode-378997
	I0703 23:23:27.811568   41947 main.go:141] libmachine: (multinode-378997) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b8:51:c5", ip: ""} in network mk-multinode-378997: {Iface:virbr1 ExpiryTime:2024-07-04 00:20:42 +0000 UTC Type:0 Mac:52:54:00:b8:51:c5 Iaid: IPaddr:192.168.39.180 Prefix:24 Hostname:multinode-378997 Clientid:01:52:54:00:b8:51:c5}
	I0703 23:23:27.811602   41947 main.go:141] libmachine: (multinode-378997) DBG | domain multinode-378997 has defined IP address 192.168.39.180 and MAC address 52:54:00:b8:51:c5 in network mk-multinode-378997
	I0703 23:23:27.811747   41947 main.go:141] libmachine: (multinode-378997) Calling .GetSSHPort
	I0703 23:23:27.811893   41947 main.go:141] libmachine: (multinode-378997) Calling .GetSSHKeyPath
	I0703 23:23:27.812041   41947 main.go:141] libmachine: (multinode-378997) Calling .GetSSHUsername
	I0703 23:23:27.812195   41947 sshutil.go:53] new ssh client: &{IP:192.168.39.180 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/multinode-378997/id_rsa Username:docker}
	I0703 23:23:27.892840   41947 ssh_runner.go:195] Run: systemctl --version
	I0703 23:23:27.899233   41947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:23:27.916481   41947 kubeconfig.go:125] found "multinode-378997" server: "https://192.168.39.180:8443"
	I0703 23:23:27.916511   41947 api_server.go:166] Checking apiserver status ...
	I0703 23:23:27.916552   41947 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0703 23:23:27.932528   41947 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1865/cgroup
	W0703 23:23:27.943817   41947 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1865/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0703 23:23:27.943882   41947 ssh_runner.go:195] Run: ls
	I0703 23:23:27.948518   41947 api_server.go:253] Checking apiserver healthz at https://192.168.39.180:8443/healthz ...
	I0703 23:23:27.953520   41947 api_server.go:279] https://192.168.39.180:8443/healthz returned 200:
	ok
	I0703 23:23:27.953544   41947 status.go:456] multinode-378997 apiserver status = Running (err=<nil>)
	I0703 23:23:27.953554   41947 status.go:176] multinode-378997 status: &{Name:multinode-378997 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:23:27.953572   41947 status.go:174] checking status of multinode-378997-m02 ...
	I0703 23:23:27.953915   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:27.953949   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:27.968626   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44591
	I0703 23:23:27.969030   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:27.969484   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:27.969506   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:27.969843   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:27.970062   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .GetState
	I0703 23:23:27.971628   41947 status.go:364] multinode-378997-m02 host status = "Running" (err=<nil>)
	I0703 23:23:27.971645   41947 host.go:66] Checking if "multinode-378997-m02" exists ...
	I0703 23:23:27.971911   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:27.971941   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:27.986332   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36621
	I0703 23:23:27.986673   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:27.987069   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:27.987087   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:27.987394   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:27.987573   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .GetIP
	I0703 23:23:27.990127   41947 main.go:141] libmachine: (multinode-378997-m02) DBG | domain multinode-378997-m02 has defined MAC address 52:54:00:8c:55:1e in network mk-multinode-378997
	I0703 23:23:27.990472   41947 main.go:141] libmachine: (multinode-378997-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:55:1e", ip: ""} in network mk-multinode-378997: {Iface:virbr1 ExpiryTime:2024-07-04 00:21:54 +0000 UTC Type:0 Mac:52:54:00:8c:55:1e Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-378997-m02 Clientid:01:52:54:00:8c:55:1e}
	I0703 23:23:27.990500   41947 main.go:141] libmachine: (multinode-378997-m02) DBG | domain multinode-378997-m02 has defined IP address 192.168.39.77 and MAC address 52:54:00:8c:55:1e in network mk-multinode-378997
	I0703 23:23:27.990641   41947 host.go:66] Checking if "multinode-378997-m02" exists ...
	I0703 23:23:27.990975   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:27.991014   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:28.006727   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40541
	I0703 23:23:28.007166   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:28.007588   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:28.007610   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:28.007925   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:28.008054   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .DriverName
	I0703 23:23:28.008211   41947 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0703 23:23:28.008230   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .GetSSHHostname
	I0703 23:23:28.010531   41947 main.go:141] libmachine: (multinode-378997-m02) DBG | domain multinode-378997-m02 has defined MAC address 52:54:00:8c:55:1e in network mk-multinode-378997
	I0703 23:23:28.010844   41947 main.go:141] libmachine: (multinode-378997-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:55:1e", ip: ""} in network mk-multinode-378997: {Iface:virbr1 ExpiryTime:2024-07-04 00:21:54 +0000 UTC Type:0 Mac:52:54:00:8c:55:1e Iaid: IPaddr:192.168.39.77 Prefix:24 Hostname:multinode-378997-m02 Clientid:01:52:54:00:8c:55:1e}
	I0703 23:23:28.010871   41947 main.go:141] libmachine: (multinode-378997-m02) DBG | domain multinode-378997-m02 has defined IP address 192.168.39.77 and MAC address 52:54:00:8c:55:1e in network mk-multinode-378997
	I0703 23:23:28.011007   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .GetSSHPort
	I0703 23:23:28.011172   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .GetSSHKeyPath
	I0703 23:23:28.011302   41947 main.go:141] libmachine: (multinode-378997-m02) Calling .GetSSHUsername
	I0703 23:23:28.011438   41947 sshutil.go:53] new ssh client: &{IP:192.168.39.77 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18998-9391/.minikube/machines/multinode-378997-m02/id_rsa Username:docker}
	I0703 23:23:28.092559   41947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0703 23:23:28.107272   41947 status.go:176] multinode-378997-m02 status: &{Name:multinode-378997-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:23:28.107299   41947 status.go:174] checking status of multinode-378997-m03 ...
	I0703 23:23:28.107573   41947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:23:28.107610   41947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:23:28.122356   41947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37839
	I0703 23:23:28.122822   41947 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:23:28.123358   41947 main.go:141] libmachine: Using API Version  1
	I0703 23:23:28.123389   41947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:23:28.123686   41947 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:23:28.123852   41947 main.go:141] libmachine: (multinode-378997-m03) Calling .GetState
	I0703 23:23:28.125338   41947 status.go:364] multinode-378997-m03 host status = "Stopped" (err=<nil>)
	I0703 23:23:28.125350   41947 status.go:377] host is not running, skipping remaining checks
	I0703 23:23:28.125354   41947 status.go:176] multinode-378997-m03 status: &{Name:multinode-378997-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.34s)
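Note the exit-code contract this test leans on: with m03 stopped, `minikube status` still prints per-node state but exits non-zero (7 in this run), so scripts can branch on the code instead of parsing stdout. A minimal sketch:

    out/minikube-linux-amd64 -p multinode-378997 status >/dev/null
    rc=$?
    if [ "$rc" -ne 0 ]; then
      echo "at least one node is not running (status exit code $rc)" >&2
    fi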

TestMultiNode/serial/StartAfterStop (32.18s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-378997 node start m03 -v=7 --alsologtostderr: (31.583869405s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.18s)

TestMultiNode/serial/RestartKeepsNodes (151.72s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-378997
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-378997
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-378997: (28.194606753s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378997 --wait=true -v=8 --alsologtostderr
E0703 23:25:44.338423   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378997 --wait=true -v=8 --alsologtostderr: (2m3.44567134s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-378997
--- PASS: TestMultiNode/serial/RestartKeepsNodes (151.72s)

TestMultiNode/serial/DeleteNode (2.34s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-378997 node delete m03: (1.823884953s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.34s)

TestMultiNode/serial/StopMultiNode (25.84s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-378997 stop: (25.682583444s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378997 status: exit status 7 (81.715974ms)

-- stdout --
	multinode-378997
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-378997-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr: exit status 7 (76.475297ms)

-- stdout --
	multinode-378997
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-378997-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0703 23:27:00.166791   43584 out.go:291] Setting OutFile to fd 1 ...
	I0703 23:27:00.167041   43584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:27:00.167051   43584 out.go:304] Setting ErrFile to fd 2...
	I0703 23:27:00.167057   43584 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0703 23:27:00.167219   43584 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18998-9391/.minikube/bin
	I0703 23:27:00.167376   43584 out.go:298] Setting JSON to false
	I0703 23:27:00.167406   43584 mustload.go:65] Loading cluster: multinode-378997
	I0703 23:27:00.167500   43584 notify.go:220] Checking for updates...
	I0703 23:27:00.167794   43584 config.go:182] Loaded profile config "multinode-378997": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
	I0703 23:27:00.167813   43584 status.go:174] checking status of multinode-378997 ...
	I0703 23:27:00.168148   43584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:27:00.168220   43584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:27:00.185989   43584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41287
	I0703 23:27:00.186432   43584 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:27:00.186974   43584 main.go:141] libmachine: Using API Version  1
	I0703 23:27:00.186993   43584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:27:00.187299   43584 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:27:00.187488   43584 main.go:141] libmachine: (multinode-378997) Calling .GetState
	I0703 23:27:00.188962   43584 status.go:364] multinode-378997 host status = "Stopped" (err=<nil>)
	I0703 23:27:00.188978   43584 status.go:377] host is not running, skipping remaining checks
	I0703 23:27:00.188983   43584 status.go:176] multinode-378997 status: &{Name:multinode-378997 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0703 23:27:00.189021   43584 status.go:174] checking status of multinode-378997-m02 ...
	I0703 23:27:00.189461   43584 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0703 23:27:00.189507   43584 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0703 23:27:00.203226   43584 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35265
	I0703 23:27:00.203511   43584 main.go:141] libmachine: () Calling .GetVersion
	I0703 23:27:00.203900   43584 main.go:141] libmachine: Using API Version  1
	I0703 23:27:00.203922   43584 main.go:141] libmachine: () Calling .SetConfigRaw
	I0703 23:27:00.204182   43584 main.go:141] libmachine: () Calling .GetMachineName
	I0703 23:27:00.204331   43584 main.go:141] libmachine: (multinode-378997-m02) Calling .GetState
	I0703 23:27:00.205732   43584 status.go:364] multinode-378997-m02 host status = "Stopped" (err=<nil>)
	I0703 23:27:00.205743   43584 status.go:377] host is not running, skipping remaining checks
	I0703 23:27:00.205748   43584 status.go:176] multinode-378997-m02 status: &{Name:multinode-378997-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.84s)

TestMultiNode/serial/RestartMultiNode (90.35s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378997 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0703 23:28:03.965406   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378997 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m29.848623066s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378997 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (90.35s)

TestMultiNode/serial/ValidateNameConflict (51.92s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-378997
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378997-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-378997-m02 --driver=kvm2 : exit status 14 (55.592258ms)

-- stdout --
	* [multinode-378997-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-378997-m02' is duplicated with machine name 'multinode-378997-m02' in profile 'multinode-378997'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378997-m03 --driver=kvm2 
E0703 23:28:47.385011   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378997-m03 --driver=kvm2 : (50.666091865s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-378997
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-378997: exit status 80 (194.506784ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-378997 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-378997-m03 already exists in multinode-378997-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-378997-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.92s)
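What ValidateNameConflict exercises is a pure naming rule: a new profile may not collide with an existing profile name or with any machine name derived from one (the `-m02`, `-m03` suffixes). A sketch of that rule, illustrative only and not minikube's actual code:

```go
package main

import "fmt"

// validateName rejects a proposed profile name that duplicates an existing
// profile or one of its machine names, mirroring the MK_USAGE error above.
func validateName(newProfile string, existing map[string][]string) error {
	for profile, machines := range existing {
		if newProfile == profile {
			return fmt.Errorf("profile name %q is duplicated with profile %q", newProfile, profile)
		}
		for _, m := range machines {
			if newProfile == m {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					newProfile, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-378997": {"multinode-378997", "multinode-378997-m02"},
	}
	fmt.Println(validateName("multinode-378997-m02", existing)) // rejected, as at multinode_test.go:464
	fmt.Println(validateName("multinode-378997-m03", existing)) // nil: accepted, as at multinode_test.go:472
}
```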

                                                
                                    
TestPreload (172.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-073792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0703 23:30:44.338805   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-073792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (1m44.841101933s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-073792 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-073792 image pull gcr.io/k8s-minikube/busybox: (1.268186365s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-073792
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-073792: (12.605781617s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-073792 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-073792 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (53.067356931s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-073792 image list
helpers_test.go:175: Cleaning up "test-preload-073792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-073792
--- PASS: TestPreload (172.79s)
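The final assertion in TestPreload (preload_test.go:71) is that the busybox image pulled before the stop is still present after the restart. A rough sketch of that check, assuming the same binary path and profile name as the log; the exact `image list` output format is not shown above, so the substring match is an assumption:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Assumes it is run from the repo root, like the test harness.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-073792", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("pulled image survived the stop/start cycle")
	}
}
```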

                                                
                                    
TestScheduledStopUnix (235.48s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-479127 --memory=2048 --driver=kvm2 
E0703 23:33:03.965921   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-479127 --memory=2048 --driver=kvm2 : (2m43.971780381s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479127 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-479127 -n scheduled-stop-479127
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479127 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0703 23:35:01.262984   16676 retry.go:31] will retry after 72.819µs: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.264164   16676 retry.go:31] will retry after 192.637µs: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.265272   16676 retry.go:31] will retry after 144.6µs: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.266428   16676 retry.go:31] will retry after 372.197µs: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.267550   16676 retry.go:31] will retry after 304.276µs: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.268678   16676 retry.go:31] will retry after 1.117772ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.270891   16676 retry.go:31] will retry after 591.204µs: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.272039   16676 retry.go:31] will retry after 2.12874ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.275247   16676 retry.go:31] will retry after 1.819259ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.277489   16676 retry.go:31] will retry after 4.53635ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.282683   16676 retry.go:31] will retry after 5.292073ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.288883   16676 retry.go:31] will retry after 8.759611ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.298090   16676 retry.go:31] will retry after 13.61039ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
I0703 23:35:01.312321   16676 retry.go:31] will retry after 26.533994ms: open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/scheduled-stop-479127/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479127 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-479127 -n scheduled-stop-479127
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-479127
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-479127 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0703 23:35:44.338906   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:36:07.014117   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-479127
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-479127: exit status 7 (63.628497ms)

                                                
                                                
-- stdout --
	scheduled-stop-479127
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-479127 -n scheduled-stop-479127
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-479127 -n scheduled-stop-479127: exit status 7 (58.811474ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-479127" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-479127
--- PASS: TestScheduledStopUnix (235.48s)
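The burst of retry.go:31 lines above shows how the test waits for the scheduled-stop daemon to write its pid file: each failed open is retried after a short delay that roughly doubles. A self-contained sketch of that loop; the path and attempt count are made up:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile retries reading a pid file with a growing delay, matching
// the "will retry after ..." pattern in the log above.
func waitForPidFile(path string, attempts int) ([]byte, error) {
	delay := 100 * time.Microsecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		b, err := os.ReadFile(path)
		if err == nil {
			return b, nil
		}
		lastErr = err
		fmt.Printf("retry %d: will retry after %v: %v\n", i+1, delay, err)
		time.Sleep(delay)
		delay *= 2 // roughly the doubling (with jitter) visible in the log
	}
	return nil, lastErr
}

func main() {
	// Hypothetical path; the real test reads the profile directory under .minikube.
	if _, err := waitForPidFile("/tmp/scheduled-stop-demo/pid", 5); err != nil {
		fmt.Println("gave up:", err)
	}
}
```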

                                                
                                    
TestSkaffold (141.12s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe984978204 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-678928 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-678928 --memory=2600 --driver=kvm2 : (50.836444819s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe984978204 run --minikube-profile skaffold-678928 --kube-context skaffold-678928 --status-check=true --port-forward=false --interactive=false
E0703 23:38:03.965381   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe984978204 run --minikube-profile skaffold-678928 --kube-context skaffold-678928 --status-check=true --port-forward=false --interactive=false: (1m17.182193983s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-578cf4b769-72pkg" [a45333c1-191f-4ae1-9ad3-98b2b7bf376f] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003427374s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-cbf57b69c-tvrj5" [3f4c812d-5c68-45d7-9f8d-b499a182dc52] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003559324s
helpers_test.go:175: Cleaning up "skaffold-678928" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-678928
--- PASS: TestSkaffold (141.12s)

                                                
                                    
TestRunningBinaryUpgrade (209.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.9108369 start -p running-upgrade-208199 --memory=2200 --vm-driver=kvm2 
E0703 23:44:43.515608   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.9108369 start -p running-upgrade-208199 --memory=2200 --vm-driver=kvm2 : (2m16.845528788s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-208199 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-208199 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m11.194475671s)
helpers_test.go:175: Cleaning up "running-upgrade-208199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-208199
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-208199: (1.162179515s)
--- PASS: TestRunningBinaryUpgrade (209.57s)

                                                
                                    
TestKubernetesUpgrade (213.67s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m29.561095806s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-978850
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-978850: (3.297537561s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-978850 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-978850 status --format={{.Host}}: exit status 7 (76.886953ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 : (1m4.015762276s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-978850 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (74.984297ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-978850] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-978850
	    minikube start -p kubernetes-upgrade-978850 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-9788502 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.2, by running:
	    
	    minikube start -p kubernetes-upgrade-978850 --kubernetes-version=v1.30.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-978850 --memory=2200 --kubernetes-version=v1.30.2 --alsologtostderr -v=1 --driver=kvm2 : (54.885829372s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-978850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-978850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-978850: (1.703780588s)
--- PASS: TestKubernetesUpgrade (213.67s)
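The downgrade refusal above (exit status 106, K8S_DOWNGRADE_UNSUPPORTED) boils down to an ordered comparison of the requested version against the cluster's existing one. A simplified sketch, assuming plain `vMAJOR.MINOR.PATCH` strings; minikube's real code uses a semver library:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits "v1.30.2" into numeric parts; pre-release tags are ignored here.
func parse(v string) (p [3]int) {
	for i, s := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		p[i], _ = strconv.Atoi(s)
	}
	return
}

func less(a, b [3]int) bool {
	for i := range a {
		if a[i] != b[i] {
			return a[i] < b[i]
		}
	}
	return false
}

// check mirrors the guard seen in the log: asking for an older version than
// the running cluster is refused rather than attempted.
func check(existing, requested string) error {
	if less(parse(requested), parse(existing)) {
		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	fmt.Println(check("v1.30.2", "v1.20.0")) // refused, as at version_upgrade_test.go:269
	fmt.Println(check("v1.30.2", "v1.30.2")) // nil: restarting at the same version is fine
}
```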

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (209.55s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3048549630 start -p stopped-upgrade-978871 --memory=2200 --vm-driver=kvm2 
I0703 23:38:35.664441   16676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0703 23:38:37.484516   16676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0703 23:38:37.511706   16676 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0703 23:38:37.511732   16676 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0703 23:38:37.511790   16676 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0703 23:38:37.511818   16676 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3464162070/002/docker-machine-driver-kvm2
I0703 23:38:37.566460   16676 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3464162070/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0 0x437dfc0] Decompressors:map[bz2:0xc0004ccd10 gz:0xc0004ccd18 tar:0xc0004cc8b0 tar.bz2:0xc0004cc8c0 tar.gz:0xc0004cc8d0 tar.xz:0xc0004cc8e0 tar.zst:0xc0004cc8f0 tbz2:0xc0004cc8c0 tgz:0xc0004cc8d0 txz:0xc0004cc8e0 tzst:0xc0004cc8f0 xz:0xc0004ccd20 zip:0xc0004ccd30 zst:0xc0004ccd28] Getters:map[file:0xc001d99800 http:0xc0007febe0 https:0xc0007fec30] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0703 23:38:37.566498   16676 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3464162070/002/docker-machine-driver-kvm2
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3048549630 start -p stopped-upgrade-978871 --memory=2200 --vm-driver=kvm2 : (2m22.002569211s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3048549630 -p stopped-upgrade-978871 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3048549630 -p stopped-upgrade-978871 stop: (4.166197063s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-978871 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-978871 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m3.384905358s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (209.55s)
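The interleaved install.go/download.go lines above (apparently logged by a parallel driver-install test) show a fallback worth noting: the arch-suffixed release asset is tried first, and a 404 on its checksum file triggers a second attempt at the common, unsuffixed name. A sketch of that fallback against a hypothetical release server; names are illustrative, and the real code also verifies checksums:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// pickDriverURL prefers the arch-specific asset and falls back to the common
// one when the server does not answer 200, mirroring the log above.
func pickDriverURL(base string) (string, error) {
	archURL := base + "/docker-machine-driver-kvm2-amd64"
	resp, err := http.Head(archURL)
	if err == nil {
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return archURL, nil
		}
	}
	// "failed to download arch specific driver ... trying to get the common version"
	return base + "/docker-machine-driver-kvm2", nil
}

func main() {
	// Hypothetical release server that only publishes the common asset.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/docker-machine-driver-kvm2" {
			w.WriteHeader(http.StatusOK)
			return
		}
		http.NotFound(w, r)
	}))
	defer srv.Close()

	url, _ := pickDriverURL(srv.URL)
	fmt.Println("downloading:", url) // falls back to the common asset
}
```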

                                                
                                    
TestPause/serial/Start (161.75s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-130545 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-130545 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (2m41.749471775s)
--- PASS: TestPause/serial/Start (161.75s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-482644 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-482644 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (57.546605ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-482644] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18998
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18998-9391/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18998-9391/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
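StartNoK8sWithVersion only checks argument validation: --no-kubernetes and --kubernetes-version are mutually exclusive, and the CLI exits 14 (the usage-error status recorded above) without ever touching the driver. A sketch of that guard, illustrative rather than the actual start command:

```go
package main

import (
	"fmt"
	"os"
)

// validate reproduces the observable MK_USAGE behaviour from the log.
func validate(noKubernetes bool, kubernetesVersion string) error {
	if noKubernetes && kubernetesVersion != "" {
		return fmt.Errorf("cannot specify --kubernetes-version with --no-kubernetes,\nto unset a global config run:\n\n$ minikube config unset kubernetes-version")
	}
	return nil
}

func main() {
	// The flag combination from the test: --no-kubernetes --kubernetes-version=1.20
	if err := validate(true, "1.20"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14) // exit status 14, as the test expects
	}
}
```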

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (81.69s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-482644 --driver=kvm2 
E0703 23:40:44.338246   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-482644 --driver=kvm2 : (1m21.397239399s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-482644 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (81.69s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (59.61s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-130545 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-130545 --alsologtostderr -v=1 --driver=kvm2 : (59.586810902s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (59.61s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-482644 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-482644 --no-kubernetes --driver=kvm2 : (16.387243244s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-482644 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-482644 status -o json: exit status 2 (287.898239ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-482644","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-482644
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-482644: (1.085410964s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.76s)

                                                
                                    
TestNoKubernetes/serial/Start (29.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-482644 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-482644 --no-kubernetes --driver=kvm2 : (29.080755803s)
--- PASS: TestNoKubernetes/serial/Start (29.08s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-978871
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-978871: (1.445322444s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.45s)

                                                
                                    
TestPause/serial/Pause (0.54s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-130545 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.54s)

                                                
                                    
TestPause/serial/VerifyStatus (0.22s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-130545 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-130545 --output=json --layout=cluster: exit status 2 (219.744696ms)

                                                
                                                
-- stdout --
	{"Name":"pause-130545","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-130545","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.22s)
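The --layout=cluster JSON above encodes pause state in HTTP-style codes: 418 for Paused, 405 for Stopped, 200 for OK, and the command exits non-zero whenever the cluster is not fully running, which is why the test tolerates exit status 2. A sketch that decodes just the fields shown; the struct is mine, not minikube's exported types:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterStatus covers only the fields visible in the log output above.
type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Nodes      []struct {
		Name       string
		Components map[string]struct {
			StatusCode int
			StatusName string
		}
	}
}

func main() {
	// Trimmed copy of the stdout captured by the test.
	raw := `{"Name":"pause-130545","StatusCode":418,"StatusName":"Paused",
	  "Nodes":[{"Name":"pause-130545","Components":{
	    "apiserver":{"StatusCode":418,"StatusName":"Paused"},
	    "kubelet":{"StatusCode":405,"StatusName":"Stopped"}}}]}`

	var st clusterStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// 418 marks a paused cluster, hence the expected non-zero exit.
	fmt.Println(st.StatusName, st.Nodes[0].Components["kubelet"].StatusName)
}
```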

                                                
                                    
TestPause/serial/Unpause (0.49s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-130545 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.49s)

                                                
                                    
TestPause/serial/PauseAgain (0.66s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-130545 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

                                                
                                    
TestPause/serial/DeletePaused (0.97s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-130545 --alsologtostderr -v=5
--- PASS: TestPause/serial/DeletePaused (0.97s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-482644 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-482644 "sudo systemctl is-active --quiet service kubelet": exit status 1 (196.946411ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.20s)
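The expected failure here is the point: `systemctl is-active` exits 0 only for an active unit, and status 3 (surfacing over ssh as "Process exited with status 3") means inactive, so kubelet really is off. A local sketch of the same check; it requires a systemd host, and the extra "service" token is kept exactly as it appears in the test command:

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test runs over ssh inside the guest.
	err := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active; the test would fail here")
	case errors.As(err, &exitErr):
		// Exit status 3 = unit inactive: the outcome the test asserts.
		fmt.Println("kubelet not active, exit status:", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}
```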

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.55s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-482644
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-482644: (2.348245029s)
--- PASS: TestNoKubernetes/serial/Stop (2.35s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (97.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-482644 --driver=kvm2 
E0703 23:43:03.966150   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-482644 --driver=kvm2 : (1m37.840340917s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (97.84s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-482644 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-482644 "sudo systemctl is-active --quiet service kubelet": exit status 1 (188.706941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (168.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-231081 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
E0703 23:46:05.436579   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-231081 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m48.517679294s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (168.52s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (116.72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-368149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-368149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2: (1m56.718334105s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (116.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (121.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-195213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-195213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2: (2m1.333530258s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (121.33s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-231081 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b91fc2c1-e3c2-41e9-b709-ad02825b62ff] Pending
helpers_test.go:344: "busybox" [b91fc2c1-e3c2-41e9-b709-ad02825b62ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b91fc2c1-e3c2-41e9-b709-ad02825b62ff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004316864s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-231081 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.54s)
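The DeployApp steps all follow the same shape: create the busybox pod, then poll pods matching integration-test=busybox until they report Running, watching them move through the Pending and ContainersNotReady states logged above. A rough sketch of such a poll by shelling out to kubectl; the helper name and interval are mine, and the test's own helper (helpers_test.go:344) also checks readiness, which this simplification skips:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunning polls pod phases for a label selector until one is Running.
func waitForRunning(kubeContext, label string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubeContext,
			"get", "po", "-l", label, "-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		time.Sleep(2 * time.Second) // Pending -> ContainersNotReady -> Running, as in the log
	}
	return fmt.Errorf("pods %q not Running within %v", label, timeout)
}

func main() {
	if err := waitForRunning("old-k8s-version-231081", "integration-test=busybox", 8*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```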

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.79s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-698754 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-698754 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2: (1m47.790049363s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (107.79s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-231081 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-231081 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-231081 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-231081 --alsologtostderr -v=3: (13.377789356s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-231081 -n old-k8s-version-231081
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-231081 -n old-k8s-version-231081: exit status 7 (63.823182ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-231081 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
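"status error: exit status 7 (may be ok)" recurs throughout these stop/start tests. My understanding (worth verifying against minikube's status command) is that the exit code is a bitmask with one bit per stopped layer, so a fully stopped profile reports 1|2|4 = 7:

```go
package main

import "fmt"

// Illustrative flag names; the idea is one bit per non-running layer.
const (
	hostNotRunning    = 1 << 0 // VM/host stopped
	clusterNotRunning = 1 << 1 // kubelet stopped
	k8sNotRunning     = 1 << 2 // apiserver stopped
)

func main() {
	code := hostNotRunning | clusterNotRunning | k8sNotRunning
	fmt.Println("fully stopped profile exit code:", code) // 7, matching the log
}
```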

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (410.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-231081 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-231081 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m49.919574179s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-231081 -n old-k8s-version-231081
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (410.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-368149 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [44afb0c3-00a7-4b0b-b24b-37e87ce5d6c4] Pending
helpers_test.go:344: "busybox" [44afb0c3-00a7-4b0b-b24b-37e87ce5d6c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [44afb0c3-00a7-4b0b-b24b-37e87ce5d6c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.00409158s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-368149 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-368149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-368149 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.20277172s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-368149 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.31s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-368149 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-368149 --alsologtostderr -v=3: (13.391293257s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-195213 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2cd0978d-1109-4c7f-b4d8-3af04fa5aaca] Pending
helpers_test.go:344: "busybox" [2cd0978d-1109-4c7f-b4d8-3af04fa5aaca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2cd0978d-1109-4c7f-b4d8-3af04fa5aaca] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0046752s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-195213 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368149 -n no-preload-368149
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368149 -n no-preload-368149: exit status 7 (74.773771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-368149 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (324.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-368149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-368149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.30.2: (5m24.604176445s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-368149 -n no-preload-368149
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (324.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-195213 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-195213 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-195213 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-195213 --alsologtostderr -v=3: (13.320906039s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.32s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-195213 -n embed-certs-195213
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-195213 -n embed-certs-195213: exit status 7 (83.818985ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-195213 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (305.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-195213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2
E0703 23:50:44.338412   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:50:47.410411   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:50:47.415960   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:50:47.426725   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-195213 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.30.2: (5m5.500098635s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-195213 -n embed-certs-195213
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (305.76s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-698754 create -f testdata/busybox.yaml
E0703 23:50:47.447501   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:50:47.487744   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:50:47.568237   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1beb4f5b-6592-4b82-bfa3-1a8d6bc16a9c] Pending
E0703 23:50:47.728788   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:50:48.049517   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1beb4f5b-6592-4b82-bfa3-1a8d6bc16a9c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0703 23:50:48.690610   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:50:49.971727   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
helpers_test.go:344: "busybox" [1beb4f5b-6592-4b82-bfa3-1a8d6bc16a9c] Running
E0703 23:50:52.532549   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005395355s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-698754 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.37s)
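
For reference, the deploy-and-check sequence above can be reproduced by hand. A minimal sketch, assuming kubectl is on PATH and substituting "kubectl wait" for the harness's own pod polling (helpers_test.go:344):

	# Deploy the busybox test pod and block until it reports Ready (the test caps this at 8m0s)
	kubectl --context default-k8s-diff-port-698754 create -f testdata/busybox.yaml
	kubectl --context default-k8s-diff-port-698754 wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
	# The test then reads the container's open-file limit
	kubectl --context default-k8s-diff-port-698754 exec busybox -- /bin/sh -c "ulimit -n"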

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-698754 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0703 23:50:57.653608   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-698754 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-698754 --alsologtostderr -v=3
E0703 23:51:07.894213   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-698754 --alsologtostderr -v=3: (13.340721164s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754: exit status 7 (73.681786ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-698754 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
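
As the "(may be ok)" annotation indicates, a non-zero exit from status is expected while the profile is stopped, and the test enables the addon regardless. A minimal sketch of the same check (the exit-code echo is added here for illustration and is not part of the harness):

	# Host is stopped: status prints "Stopped" and exits with code 7
	out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-698754 || echo "status exit: $?"
	# Enabling an addon is still allowed against a stopped profile
	out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-698754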

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.99s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-698754 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2
E0703 23:51:28.374373   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:52:09.334534   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
E0703 23:52:47.015017   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:53:03.966205   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/functional-377836/client.crt: no such file or directory
E0703 23:53:21.593138   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
E0703 23:53:31.255261   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-698754 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.30.2: (5m20.708334634s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (320.99s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-j2g7s" [3a8e282f-5e90-4f4c-a340-04c0ebc21ddd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004387781s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-xkphp" [943633b8-2c76-467c-859d-0f89f4656a38] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004472051s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-j2g7s" [3a8e282f-5e90-4f4c-a340-04c0ebc21ddd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00486999s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-195213 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-xkphp" [943633b8-2c76-467c-859d-0f89f4656a38] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006791261s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-368149 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-195213 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)
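
The image check above parses the JSON listing in Go; a rough shell equivalent (the repoTags field name and the jq filtering are assumptions, not the harness's code):

	# List image tags in the profile; anything outside the expected Kubernetes image set
	# is reported as "non-minikube", as in the two lines above
	out/minikube-linux-amd64 -p embed-certs-195213 image list --format=json | jq -r '.[].repoTags[]'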

TestStartStop/group/embed-certs/serial/Pause (2.6s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-195213 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-195213 -n embed-certs-195213
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-195213 -n embed-certs-195213: exit status 2 (261.102506ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-195213 -n embed-certs-195213
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-195213 -n embed-certs-195213: exit status 2 (251.069832ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-195213 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-195213 -n embed-certs-195213
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-195213 -n embed-certs-195213
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)
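
Here exit status 2 from status is treated as acceptable: while the cluster is paused, the API server reports "Paused" and the kubelet "Stopped". A sketch of the round-trip the test performs (the trailing comments show the outputs observed above):

	out/minikube-linux-amd64 pause -p embed-certs-195213 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-195213   # "Paused", exit 2
	out/minikube-linux-amd64 unpause -p embed-certs-195213 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format='{{.APIServer}}' -p embed-certs-195213   # back to normal, exit 0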

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-368149 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.96s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-368149 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368149 -n no-preload-368149
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368149 -n no-preload-368149: exit status 2 (263.209051ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-368149 -n no-preload-368149
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-368149 -n no-preload-368149: exit status 2 (264.550843ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-368149 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-368149 -n no-preload-368149
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-368149 -n no-preload-368149
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.96s)

TestStartStop/group/newest-cni/serial/FirstStart (76.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-694574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-694574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2: (1m16.445929379s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (76.45s)

TestNetworkPlugins/group/auto/Start (100.16s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0703 23:55:44.338157   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/addons-765846/client.crt: no such file or directory
E0703 23:55:47.410342   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m40.159604496s)
--- PASS: TestNetworkPlugins/group/auto/Start (100.16s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lrstr" [3c51a7d6-0cb0-4054-a9ff-956391d647ac] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003716161s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-lrstr" [3c51a7d6-0cb0-4054-a9ff-956391d647ac] Running
E0703 23:56:15.095800   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/gvisor-957530/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005090059s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-231081 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-231081 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-231081 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-231081 -n old-k8s-version-231081
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-231081 -n old-k8s-version-231081: exit status 2 (243.316856ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-231081 -n old-k8s-version-231081
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-231081 -n old-k8s-version-231081: exit status 2 (255.807472ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-231081 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-231081 -n old-k8s-version-231081
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-231081 -n old-k8s-version-231081
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.54s)

TestNetworkPlugins/group/flannel/Start (80.19s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m20.190571539s)
--- PASS: TestNetworkPlugins/group/flannel/Start (80.19s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rs92k" [053be2ff-29bf-4f72-b791-07a13fead9e6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rs92k" [053be2ff-29bf-4f72-b791-07a13fead9e6] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.004926881s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (16.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-694574 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (8.39s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-694574 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-694574 --alsologtostderr -v=3: (8.388173521s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.39s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-rs92k" [053be2ff-29bf-4f72-b791-07a13fead9e6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004725952s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-698754 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-698754 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-698754 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754: exit status 2 (352.595992ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754: exit status 2 (290.234023ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-698754 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-698754 -n default-k8s-diff-port-698754
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.11s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-694574 -n newest-cni-694574
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-694574 -n newest-cni-694574: exit status 7 (81.620442ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-694574 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.34s)

TestStartStop/group/newest-cni/serial/SecondStart (40.18s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-694574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-694574 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.30.2: (39.872355462s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-694574 -n newest-cni-694574
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (40.18s)

TestNetworkPlugins/group/enable-default-cni/Start (92.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m32.264575028s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (92.26s)

TestNetworkPlugins/group/auto/KubeletFlags (0.2s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-909477 "pgrep -a kubelet"
I0703 23:57:09.853996   16676 config.go:182] Loaded profile config "auto-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.20s)

TestNetworkPlugins/group/auto/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7vq4m" [a85aafe8-c389-46bb-b959-1f7a5e55f715] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7vq4m" [a85aafe8-c389-46bb-b959-1f7a5e55f715] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004462633s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.24s)

TestNetworkPlugins/group/auto/DNS (21.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-909477 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-909477 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.186609406s)

-- stdout --
	;; connection timed out; no servers could be reached
	
	

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
I0703 23:57:36.279464   16676 retry.go:31] will retry after 897.349067ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context auto-909477 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-909477 exec deployment/netcat -- nslookup kubernetes.default: (5.161712162s)
--- PASS: TestNetworkPlugins/group/auto/DNS (21.25s)
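
The first nslookup timed out and the harness retried after a computed back-off (retry.go:31) until it passed. A rough shell equivalent of that loop (the attempt count and the fixed one-second sleep are assumptions; the harness computes its own delays):

	for i in 1 2 3; do
	  kubectl --context auto-909477 exec deployment/netcat -- nslookup kubernetes.default && break
	  sleep 1
	done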

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-694574 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/gvisor-addon:2
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (2.48s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-694574 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-694574 -n newest-cni-694574
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-694574 -n newest-cni-694574: exit status 2 (253.277614ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-694574 -n newest-cni-694574
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-694574 -n newest-cni-694574: exit status 2 (241.930292ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-694574 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-694574 -n newest-cni-694574
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-694574 -n newest-cni-694574
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.48s)
E0704 00:01:08.143362   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/default-k8s-diff-port-698754/client.crt: no such file or directory

TestNetworkPlugins/group/bridge/Start (80s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m19.996552796s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.00s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-l9wwq" [b07c8c7f-8e07-4e5c-9cf5-b5b40c18ef97] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006013701s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.12s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-909477 "pgrep -a kubelet"
I0703 23:57:48.530317   16676 config.go:182] Loaded profile config "flannel-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (14.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kdkcl" [4eabcfbd-5aac-4e67-b882-cad65e77ed78] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kdkcl" [4eabcfbd-5aac-4e67-b882-cad65e77ed78] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 14.004931262s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (14.32s)

TestNetworkPlugins/group/kubenet/Start (88.74s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m28.743521056s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (88.74s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

TestNetworkPlugins/group/flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (123.36s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
E0703 23:58:21.593729   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/skaffold-678928/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (2m3.362661262s)
--- PASS: TestNetworkPlugins/group/calico/Start (123.36s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-909477 "pgrep -a kubelet"
I0703 23:58:30.580443   16676 config.go:182] Loaded profile config "enable-default-cni-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-47svh" [0b872f8e-db66-49c5-a3db-e2be7f1f6906] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-47svh" [0b872f8e-db66-49c5-a3db-e2be7f1f6906] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.00406725s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-909477 "pgrep -a kubelet"
I0703 23:58:58.805868   16676 config.go:182] Loaded profile config "bridge-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-k4fwt" [4332189b-8eb8-47be-87a8-b28be131ab28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0703 23:59:03.151562   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/old-k8s-version-231081/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-k4fwt" [4332189b-8eb8-47be-87a8-b28be131ab28] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003005398s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/kindnet/Start (94.72s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m34.715192568s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.72s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (101.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
E0703 23:59:27.660419   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/no-preload-368149/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m41.579289415s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (101.58s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-909477 "pgrep -a kubelet"
I0703 23:59:28.252209   16676 config.go:182] Loaded profile config "kubenet-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.21s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-8q4zp" [12636ff5-6391-4299-b75e-9786cd57a3f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0703 23:59:30.220689   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/no-preload-368149/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-8q4zp" [12636ff5-6391-4299-b75e-9786cd57a3f3] Running
E0703 23:59:33.873003   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/old-k8s-version-231081/client.crt: no such file or directory
E0703 23:59:35.341521   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/no-preload-368149/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004383428s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.27s)
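The NetCatPod subtest force-replaces the probe deployment and then polls until a pod labelled app=netcat is Ready; the interleaved cert_rotation errors appear to be leftover client-cert watchers for profiles deleted earlier in the run and do not affect the result. Roughly the same deploy-and-wait flow with stock kubectl (a sketch; the 15m timeout mirrors the test's wait budget):

  kubectl --context kubenet-909477 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context kubenet-909477 wait --for=condition=Ready pod -l app=netcat --timeout=15m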

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.12s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestNetworkPlugins/group/false/Start (90.02s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
E0704 00:00:06.063294   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/no-preload-368149/client.crt: no such file or directory
E0704 00:00:14.834187   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/old-k8s-version-231081/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-909477 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m30.016376892s)
--- PASS: TestNetworkPlugins/group/false/Start (90.02s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-5vwpw" [77224321-908c-4687-8a04-a1d0b4f83d47] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005856221s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
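ControllerPod gates the connectivity subtests on the CNI's own agent pod, selected by label, becoming healthy. Roughly the same check with stock kubectl (a sketch; the 10m timeout mirrors the test's wait budget):

  kubectl --context calico-909477 -n kube-system wait --for=condition=Ready pod -l k8s-app=calico-node --timeout=10m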

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-909477 "pgrep -a kubelet"
I0704 00:00:29.882173   16676 config.go:182] Loaded profile config "calico-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (12.31s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-909477 replace --force -f testdata/netcat-deployment.yaml
I0704 00:00:30.184401   16676 kapi.go:170] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cr9np" [778a8ccc-8dc5-471f-ad1e-f1d19a03f500] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cr9np" [778a8ccc-8dc5-471f-ad1e-f1d19a03f500] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004930445s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.31s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-flmgx" [80e3cd88-28e5-464f-9c48-b1f71c15b20e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.006383589s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-909477 "pgrep -a kubelet"
I0704 00:00:39.878302   16676 config.go:182] Loaded profile config "kindnet-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zxjj9" [84ab2bb0-2d6d-4baf-9d99-39df6fcdfcbb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zxjj9" [84ab2bb0-2d6d-4baf-9d99-39df6fcdfcbb] Running
E0704 00:00:48.300005   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/default-k8s-diff-port-698754/client.crt: no such file or directory
E0704 00:00:48.940660   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/default-k8s-diff-port-698754/client.crt: no such file or directory
E0704 00:00:50.221414   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/default-k8s-diff-port-698754/client.crt: no such file or directory
E0704 00:00:52.781732   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/default-k8s-diff-port-698754/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.009759639s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.26s)

TestNetworkPlugins/group/calico/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.25s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/kindnet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.29s)

TestNetworkPlugins/group/kindnet/Localhost (0.3s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.30s)

TestNetworkPlugins/group/kindnet/HairPin (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.29s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-909477 "pgrep -a kubelet"
I0704 00:01:08.402635   16676 config.go:182] Loaded profile config "custom-flannel-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-h75xf" [3c8815c7-2fbc-4a91-8910-c368ca602275] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-h75xf" [3c8815c7-2fbc-4a91-8910-c368ca602275] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004853965s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.25s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/false/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-909477 "pgrep -a kubelet"
I0704 00:01:26.720495   16676 config.go:182] Loaded profile config "false-909477": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.30.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.23s)

TestNetworkPlugins/group/false/NetCatPod (12.23s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-909477 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-pjvfz" [0f3fa3a9-6511-428a-8595-6d72c24b8ddc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0704 00:01:28.624194   16676 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18998-9391/.minikube/profiles/default-k8s-diff-port-698754/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-pjvfz" [0f3fa3a9-6511-428a-8595-6d72c24b8ddc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.00451833s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.23s)

TestNetworkPlugins/group/false/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-909477 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-909477 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.13s)

Test skip (31/341)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.2/cached-images (0.00s)

TestDownloadOnly/v1.30.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.2/binaries (0.00s)

TestDownloadOnly/v1.30.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.2/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-666302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-666302
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/cilium (2.9s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
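Every probe below fails with "context was not found" or "Profile ... not found" because the skip fires before minikube start ever runs, yet the post-mortem debug collector still executes against the nonexistent cilium-909477 cluster; the output is noise, not a regression. To exercise the plugin manually, a start along the lines of the other groups should work (a sketch, untested here precisely because the test is skipped):

  out/minikube-linux-amd64 start -p cilium-909477 --memory=3072 --wait=true --wait-timeout=15m --cni=cilium --driver=kvm2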
panic.go:626: 
----------------------- debugLogs start: cilium-909477 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-909477

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-909477

>>> host: /etc/nsswitch.conf:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/hosts:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/resolv.conf:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-909477

>>> host: crictl pods:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: crictl containers:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> k8s: describe netcat deployment:
error: context "cilium-909477" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-909477" does not exist

>>> k8s: netcat logs:
error: context "cilium-909477" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-909477" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-909477" does not exist

>>> k8s: coredns logs:
error: context "cilium-909477" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-909477" does not exist

>>> k8s: api server logs:
error: context "cilium-909477" does not exist

>>> host: /etc/cni:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: ip a s:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: ip r s:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: iptables-save:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: iptables table nat:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-909477

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-909477

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-909477" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-909477" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-909477

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-909477

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-909477" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-909477" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-909477" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-909477" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-909477" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: kubelet daemon config:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> k8s: kubelet logs:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-909477

>>> host: docker daemon status:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: docker daemon config:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: docker system info:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: cri-docker daemon status:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: cri-docker daemon config:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: cri-dockerd version:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: containerd daemon status:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: containerd daemon config:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: containerd config dump:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: crio daemon status:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: crio daemon config:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: /etc/crio:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

>>> host: crio config:
* Profile "cilium-909477" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-909477"

----------------------- debugLogs end: cilium-909477 [took: 2.757847892s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-909477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-909477
--- SKIP: TestNetworkPlugins/group/cilium (2.90s)