Test Report: KVM_Linux 19423

7f7446252791c927139509879c70af875912dc64:2024-08-18:35842

Tests failed (4/340)

Order  Failed test                             Duration (s)
82     TestFunctional/serial/ComponentHealth   1.6
155    TestGvisorAddon                          6.33
297    TestNoKubernetes/serial/Start            99.38
325    TestNoKubernetes/serial/StartNoArgs      15.05
TestFunctional/serial/ComponentHealth (1.6s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-771033 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:833: etcd is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.95 PodIP:192.168.39.95 StartTime:2024-08-18 18:47:13 +0000 UTC ContainerStatuses:[{Name:etcd State:{Waiting:<nil> Running:0xc002194b10 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0021361c0} Ready:true RestartCount:3 Image:registry.k8s.io/etcd:3.5.15-0 ImageID:docker-pullable://registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a ContainerID:docker://0565a39bb52bfe91d52e6d9f0dde7ee191fcf0d165870a9643cc1a2a6c38ff63}]}
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:833: kube-apiserver is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:False} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.95 PodIP:192.168.39.95 StartTime:2024-08-18 18:48:27 +0000 UTC ContainerStatuses:[{Name:kube-apiserver State:{Waiting:<nil> Running:0xc002194b70 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:<nil>} Ready:false RestartCount:0 Image:registry.k8s.io/kube-apiserver:v1.31.0 ImageID:docker-pullable://registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf ContainerID:docker://1df53b51ee39c5227c8f0fe4bf5959801f0fdaa8c3287c7c3d3f9081e6a60d98}]}
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:833: kube-controller-manager is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.95 PodIP:192.168.39.95 StartTime:2024-08-18 18:47:13 +0000 UTC ContainerStatuses:[{Name:kube-controller-manager State:{Waiting:<nil> Running:0xc002194bd0 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc002136230} Ready:true RestartCount:3 Image:registry.k8s.io/kube-controller-manager:v1.31.0 ImageID:docker-pullable://registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d ContainerID:docker://4854ca3a4bea46ceb2056f41fe683b89cd407000aaf47da2951ae43578fa8ca8}]}
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:833: kube-scheduler is not Ready: {Phase:Running Conditions:[{Type:PodReadyToStartContainers Status:True} {Type:Initialized Status:True} {Type:Ready Status:False} {Type:ContainersReady Status:True} {Type:PodScheduled Status:True}] Message: Reason: HostIP:192.168.39.95 PodIP:192.168.39.95 StartTime:2024-08-18 18:47:13 +0000 UTC ContainerStatuses:[{Name:kube-scheduler State:{Waiting:<nil> Running:0xc002194c30 Terminated:<nil>} LastTerminationState:{Waiting:<nil> Running:<nil> Terminated:0xc0021362a0} Ready:true RestartCount:3 Image:registry.k8s.io/kube-scheduler:v1.31.0 ImageID:docker-pullable://registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808 ContainerID:docker://354afb0d0718fa574064211a716bc6faf8a82cdc3905bbec45ebc7b2154862c8}]}
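Each control-plane pod above reports Phase:Running but a Ready condition of False, which is exactly what ComponentHealth asserts against. A minimal sketch of reproducing the same readiness check by hand, assuming the functional-771033 context is still reachable from the test host:

	# List each control-plane pod together with the status of its Ready condition
	kubectl --context functional-771033 get po -l tier=control-plane -n kube-system \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

Any pod printing False here corresponds to one of the "is not Ready" lines in the log above.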
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-771033 -n functional-771033
helpers_test.go:244: <<< TestFunctional/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/ComponentHealth]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 logs -n 25
helpers_test.go:252: TestFunctional/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| unpause | nospam-290448 --log_dir                                                  | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:44 UTC | 18 Aug 24 18:44 UTC |
	|         | /tmp/nospam-290448 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-290448 --log_dir                                                  | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:44 UTC | 18 Aug 24 18:44 UTC |
	|         | /tmp/nospam-290448 unpause                                               |                   |         |         |                     |                     |
	| unpause | nospam-290448 --log_dir                                                  | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:44 UTC | 18 Aug 24 18:44 UTC |
	|         | /tmp/nospam-290448 unpause                                               |                   |         |         |                     |                     |
	| stop    | nospam-290448 --log_dir                                                  | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:44 UTC | 18 Aug 24 18:44 UTC |
	|         | /tmp/nospam-290448 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-290448 --log_dir                                                  | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:44 UTC | 18 Aug 24 18:45 UTC |
	|         | /tmp/nospam-290448 stop                                                  |                   |         |         |                     |                     |
	| stop    | nospam-290448 --log_dir                                                  | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	|         | /tmp/nospam-290448 stop                                                  |                   |         |         |                     |                     |
	| delete  | -p nospam-290448                                                         | nospam-290448     | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:45 UTC |
	| start   | -p functional-771033                                                     | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:45 UTC | 18 Aug 24 18:46 UTC |
	|         | --memory=4000                                                            |                   |         |         |                     |                     |
	|         | --apiserver-port=8441                                                    |                   |         |         |                     |                     |
	|         | --wait=all --driver=kvm2                                                 |                   |         |         |                     |                     |
	| start   | -p functional-771033                                                     | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | --alsologtostderr -v=8                                                   |                   |         |         |                     |                     |
	| cache   | functional-771033 cache add                                              | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | functional-771033 cache add                                              | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | functional-771033 cache add                                              | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-771033 cache add                                              | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | minikube-local-cache-test:functional-771033                              |                   |         |         |                     |                     |
	| cache   | functional-771033 cache delete                                           | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | minikube-local-cache-test:functional-771033                              |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | registry.k8s.io/pause:3.3                                                |                   |         |         |                     |                     |
	| cache   | list                                                                     | minikube          | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	| ssh     | functional-771033 ssh sudo                                               | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | crictl images                                                            |                   |         |         |                     |                     |
	| ssh     | functional-771033                                                        | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | ssh sudo docker rmi                                                      |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| ssh     | functional-771033 ssh                                                    | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC |                     |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | functional-771033 cache reload                                           | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	| ssh     | functional-771033 ssh                                                    | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | sudo crictl inspecti                                                     |                   |         |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | registry.k8s.io/pause:3.1                                                |                   |         |         |                     |                     |
	| cache   | delete                                                                   | minikube          | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | registry.k8s.io/pause:latest                                             |                   |         |         |                     |                     |
	| kubectl | functional-771033 kubectl --                                             | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:46 UTC |
	|         | --context functional-771033                                              |                   |         |         |                     |                     |
	|         | get pods                                                                 |                   |         |         |                     |                     |
	| start   | -p functional-771033                                                     | functional-771033 | jenkins | v1.33.1 | 18 Aug 24 18:46 UTC | 18 Aug 24 18:48 UTC |
	|         | --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision |                   |         |         |                     |                     |
	|         | --wait=all                                                               |                   |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:46:54
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:46:54.632770 1158634 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:46:54.633019 1158634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:46:54.633023 1158634 out.go:358] Setting ErrFile to fd 2...
	I0818 18:46:54.633026 1158634 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:46:54.633182 1158634 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 18:46:54.633724 1158634 out.go:352] Setting JSON to false
	I0818 18:46:54.634591 1158634 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":98916,"bootTime":1723907899,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:46:54.634643 1158634 start.go:139] virtualization: kvm guest
	I0818 18:46:54.636617 1158634 out.go:177] * [functional-771033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:46:54.637665 1158634 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:46:54.637713 1158634 notify.go:220] Checking for updates...
	I0818 18:46:54.639803 1158634 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:46:54.640852 1158634 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	I0818 18:46:54.641935 1158634 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	I0818 18:46:54.642983 1158634 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:46:54.644069 1158634 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:46:54.645476 1158634 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 18:46:54.645562 1158634 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:46:54.645961 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:46:54.646018 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:46:54.662338 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39791
	I0818 18:46:54.662755 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:46:54.663343 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:46:54.663352 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:46:54.663677 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:46:54.663843 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:54.695841 1158634 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 18:46:54.696931 1158634 start.go:297] selected driver: kvm2
	I0818 18:46:54.696946 1158634 start.go:901] validating driver "kvm2" against &{Name:functional-771033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:46:54.697048 1158634 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:46:54.697411 1158634 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:46:54.697478 1158634 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-1145725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:46:54.712701 1158634 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:46:54.713558 1158634 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:46:54.713632 1158634 cni.go:84] Creating CNI manager for ""
	I0818 18:46:54.713644 1158634 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 18:46:54.713707 1158634 start.go:340] cluster config:
	{Name:functional-771033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:46:54.713817 1158634 iso.go:125] acquiring lock: {Name:mkb8cace5317b9fbdd5a745866acff5ebdb0878a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:46:54.715438 1158634 out.go:177] * Starting "functional-771033" primary control-plane node in "functional-771033" cluster
	I0818 18:46:54.716610 1158634 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 18:46:54.716640 1158634 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4
	I0818 18:46:54.716646 1158634 cache.go:56] Caching tarball of preloaded images
	I0818 18:46:54.716724 1158634 preload.go:172] Found /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0818 18:46:54.716730 1158634 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
	I0818 18:46:54.716821 1158634 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/config.json ...
	I0818 18:46:54.717022 1158634 start.go:360] acquireMachinesLock for functional-771033: {Name:mk27543e6fe57e5c3e2e26d5ee14b83b659b1354 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0818 18:46:54.717059 1158634 start.go:364] duration metric: took 25.114µs to acquireMachinesLock for "functional-771033"
	I0818 18:46:54.717070 1158634 start.go:96] Skipping create...Using existing machine configuration
	I0818 18:46:54.717073 1158634 fix.go:54] fixHost starting: 
	I0818 18:46:54.717403 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:46:54.717432 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:46:54.732563 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0818 18:46:54.732989 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:46:54.733575 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:46:54.733600 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:46:54.733932 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:46:54.734157 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:54.734286 1158634 main.go:141] libmachine: (functional-771033) Calling .GetState
	I0818 18:46:54.735768 1158634 fix.go:112] recreateIfNeeded on functional-771033: state=Running err=<nil>
	W0818 18:46:54.735798 1158634 fix.go:138] unexpected machine state, will restart: <nil>
	I0818 18:46:54.737372 1158634 out.go:177] * Updating the running kvm2 "functional-771033" VM ...
	I0818 18:46:54.738358 1158634 machine.go:93] provisionDockerMachine start ...
	I0818 18:46:54.738369 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:54.738578 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:54.740768 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:54.741072 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:54.741082 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:54.741190 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:54.741360 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:54.741504 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:54.741611 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:54.741755 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:54.741944 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:54.741950 1158634 main.go:141] libmachine: About to run SSH command:
	hostname
	I0818 18:46:54.853417 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-771033
	
	I0818 18:46:54.853442 1158634 main.go:141] libmachine: (functional-771033) Calling .GetMachineName
	I0818 18:46:54.853722 1158634 buildroot.go:166] provisioning hostname "functional-771033"
	I0818 18:46:54.853743 1158634 main.go:141] libmachine: (functional-771033) Calling .GetMachineName
	I0818 18:46:54.853984 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:54.856694 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:54.857047 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:54.857072 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:54.857239 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:54.857447 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:54.857583 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:54.857718 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:54.857851 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:54.858089 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:54.858100 1158634 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-771033 && echo "functional-771033" | sudo tee /etc/hostname
	I0818 18:46:54.988390 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-771033
	
	I0818 18:46:54.988414 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:54.991354 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:54.991738 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:54.991764 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:54.991958 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:54.992161 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:54.992343 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:54.992496 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:54.992661 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:54.992840 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:54.992851 1158634 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-771033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-771033/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-771033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0818 18:46:55.111092 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:46:55.111112 1158634 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/19423-1145725/.minikube CaCertPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19423-1145725/.minikube}
	I0818 18:46:55.111154 1158634 buildroot.go:174] setting up certificates
	I0818 18:46:55.111164 1158634 provision.go:84] configureAuth start
	I0818 18:46:55.111174 1158634 main.go:141] libmachine: (functional-771033) Calling .GetMachineName
	I0818 18:46:55.111460 1158634 main.go:141] libmachine: (functional-771033) Calling .GetIP
	I0818 18:46:55.114176 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.114492 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.114524 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.114639 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:55.116957 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.117292 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.117313 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.117426 1158634 provision.go:143] copyHostCerts
	I0818 18:46:55.117489 1158634 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.pem, removing ...
	I0818 18:46:55.117506 1158634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.pem
	I0818 18:46:55.117572 1158634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.pem (1078 bytes)
	I0818 18:46:55.117675 1158634 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1145725/.minikube/cert.pem, removing ...
	I0818 18:46:55.117678 1158634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1145725/.minikube/cert.pem
	I0818 18:46:55.117701 1158634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19423-1145725/.minikube/cert.pem (1123 bytes)
	I0818 18:46:55.117809 1158634 exec_runner.go:144] found /home/jenkins/minikube-integration/19423-1145725/.minikube/key.pem, removing ...
	I0818 18:46:55.117813 1158634 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19423-1145725/.minikube/key.pem
	I0818 18:46:55.117839 1158634 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19423-1145725/.minikube/key.pem (1679 bytes)
	I0818 18:46:55.117902 1158634 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca-key.pem org=jenkins.functional-771033 san=[127.0.0.1 192.168.39.95 functional-771033 localhost minikube]
	I0818 18:46:55.306636 1158634 provision.go:177] copyRemoteCerts
	I0818 18:46:55.306691 1158634 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0818 18:46:55.306718 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:55.309787 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.310111 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.310136 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.310298 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:55.310519 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.310701 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:55.310801 1158634 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
	I0818 18:46:55.395300 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0818 18:46:55.422858 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0818 18:46:55.447353 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0818 18:46:55.471923 1158634 provision.go:87] duration metric: took 360.744563ms to configureAuth
	I0818 18:46:55.471946 1158634 buildroot.go:189] setting minikube options for container-runtime
	I0818 18:46:55.472139 1158634 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 18:46:55.472176 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:55.472491 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:55.475271 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.475629 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.475654 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.475755 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:55.475951 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.476110 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.476229 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:55.476417 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:55.476588 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:55.476593 1158634 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0818 18:46:55.591442 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0818 18:46:55.591456 1158634 buildroot.go:70] root file system type: tmpfs
	I0818 18:46:55.591582 1158634 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0818 18:46:55.591601 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:55.594651 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.594985 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.595012 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.595253 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:55.595441 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.595607 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.595717 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:55.595860 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:55.596031 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:55.596081 1158634 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0818 18:46:55.724197 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=kvm2 --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0818 18:46:55.724219 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:55.726837 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.727107 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.727127 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.727277 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:55.727441 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.727646 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.727826 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:55.728001 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:55.728176 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:55.728187 1158634 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0818 18:46:55.847629 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0818 18:46:55.847665 1158634 machine.go:96] duration metric: took 1.109284588s to provisionDockerMachine
	I0818 18:46:55.847682 1158634 start.go:293] postStartSetup for "functional-771033" (driver="kvm2")
	I0818 18:46:55.847695 1158634 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0818 18:46:55.847719 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:55.848060 1158634 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0818 18:46:55.848100 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:55.850885 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.851295 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:55.851314 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:55.851421 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:55.851743 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:55.851891 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:55.852011 1158634 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
	I0818 18:46:55.940830 1158634 ssh_runner.go:195] Run: cat /etc/os-release
	I0818 18:46:55.945224 1158634 info.go:137] Remote host: Buildroot 2023.02.9
	I0818 18:46:55.945242 1158634 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1145725/.minikube/addons for local assets ...
	I0818 18:46:55.945305 1158634 filesync.go:126] Scanning /home/jenkins/minikube-integration/19423-1145725/.minikube/files for local assets ...
	I0818 18:46:55.945385 1158634 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/ssl/certs/11529002.pem -> 11529002.pem in /etc/ssl/certs
	I0818 18:46:55.945465 1158634 filesync.go:149] local asset: /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/test/nested/copy/1152900/hosts -> hosts in /etc/test/nested/copy/1152900
	I0818 18:46:55.945503 1158634 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/1152900
	I0818 18:46:55.955184 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/ssl/certs/11529002.pem --> /etc/ssl/certs/11529002.pem (1708 bytes)
	I0818 18:46:55.978846 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/test/nested/copy/1152900/hosts --> /etc/test/nested/copy/1152900/hosts (40 bytes)
	I0818 18:46:56.002774 1158634 start.go:296] duration metric: took 155.07696ms for postStartSetup
	I0818 18:46:56.002812 1158634 fix.go:56] duration metric: took 1.285738596s for fixHost
	I0818 18:46:56.002836 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:56.006062 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.006460 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:56.006487 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.006672 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:56.006877 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:56.007125 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:56.007227 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:56.007458 1158634 main.go:141] libmachine: Using SSH client type: native
	I0818 18:46:56.007634 1158634 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82f9c0] 0x832720 <nil>  [] 0s} 192.168.39.95 22 <nil> <nil>}
	I0818 18:46:56.007640 1158634 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0818 18:46:56.126457 1158634 main.go:141] libmachine: SSH cmd err, output: <nil>: 1724006816.105509244
	
	I0818 18:46:56.126471 1158634 fix.go:216] guest clock: 1724006816.105509244
	I0818 18:46:56.126477 1158634 fix.go:229] Guest: 2024-08-18 18:46:56.105509244 +0000 UTC Remote: 2024-08-18 18:46:56.002815287 +0000 UTC m=+1.404895046 (delta=102.693957ms)
	I0818 18:46:56.126502 1158634 fix.go:200] guest clock delta is within tolerance: 102.693957ms
	I0818 18:46:56.126508 1158634 start.go:83] releasing machines lock for "functional-771033", held for 1.409443045s
	I0818 18:46:56.126529 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:56.126839 1158634 main.go:141] libmachine: (functional-771033) Calling .GetIP
	I0818 18:46:56.129589 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.129978 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:56.129998 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.130094 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:56.130720 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:56.130901 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:46:56.130970 1158634 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0818 18:46:56.131010 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:56.131080 1158634 ssh_runner.go:195] Run: cat /version.json
	I0818 18:46:56.131106 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:46:56.133850 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.134101 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.134219 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:56.134238 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.134395 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:56.134501 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:46:56.134524 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:46:56.134559 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:56.134723 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:56.134723 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:46:56.134887 1158634 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
	I0818 18:46:56.134961 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:46:56.135075 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:46:56.135211 1158634 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
	I0818 18:46:56.232976 1158634 ssh_runner.go:195] Run: systemctl --version
	I0818 18:46:56.238916 1158634 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0818 18:46:56.245197 1158634 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0818 18:46:56.245256 1158634 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0818 18:46:56.254202 1158634 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0818 18:46:56.254216 1158634 start.go:495] detecting cgroup driver to use...
	I0818 18:46:56.254343 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:46:56.273157 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0818 18:46:56.283299 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0818 18:46:56.293260 1158634 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0818 18:46:56.293309 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0818 18:46:56.304206 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 18:46:56.315128 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0818 18:46:56.325639 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0818 18:46:56.337923 1158634 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0818 18:46:56.349286 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0818 18:46:56.360304 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0818 18:46:56.370962 1158634 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0818 18:46:56.381221 1158634 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0818 18:46:56.391398 1158634 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0818 18:46:56.401588 1158634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:46:56.560635 1158634 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0818 18:46:56.586933 1158634 start.go:495] detecting cgroup driver to use...
	I0818 18:46:56.587031 1158634 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0818 18:46:56.608293 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:46:56.624116 1158634 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0818 18:46:56.644433 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0818 18:46:56.660301 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0818 18:46:56.674321 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0818 18:46:56.693581 1158634 ssh_runner.go:195] Run: which cri-dockerd
	I0818 18:46:56.697373 1158634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0818 18:46:56.708244 1158634 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0818 18:46:56.725176 1158634 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0818 18:46:56.885397 1158634 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0818 18:46:57.044654 1158634 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0818 18:46:57.044781 1158634 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0818 18:46:57.063853 1158634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:46:57.217602 1158634 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0818 18:47:09.820788 1158634 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.603153107s)
	I0818 18:47:09.820854 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0818 18:47:09.837295 1158634 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0818 18:47:09.864161 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 18:47:09.878286 1158634 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0818 18:47:10.003048 1158634 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0818 18:47:10.146053 1158634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:47:10.274428 1158634 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0818 18:47:10.292400 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0818 18:47:10.305819 1158634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:47:10.436130 1158634 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0818 18:47:10.540991 1158634 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0818 18:47:10.541053 1158634 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0818 18:47:10.546726 1158634 start.go:563] Will wait 60s for crictl version
	I0818 18:47:10.546776 1158634 ssh_runner.go:195] Run: which crictl
	I0818 18:47:10.550839 1158634 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0818 18:47:10.588418 1158634 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.1.2
	RuntimeApiVersion:  v1
	I0818 18:47:10.588476 1158634 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 18:47:10.611634 1158634 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0818 18:47:10.638535 1158634 out.go:235] * Preparing Kubernetes v1.31.0 on Docker 27.1.2 ...
	I0818 18:47:10.638593 1158634 main.go:141] libmachine: (functional-771033) Calling .GetIP
	I0818 18:47:10.641528 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:47:10.641841 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:47:10.641860 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:47:10.642158 1158634 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0818 18:47:10.647876 1158634 out.go:177]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I0818 18:47:10.648927 1158634 kubeadm.go:883] updating cluster {Name:functional-771033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0818 18:47:10.649038 1158634 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
	I0818 18:47:10.649093 1158634 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 18:47:10.665729 1158634 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-771033
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0818 18:47:10.665743 1158634 docker.go:615] Images already preloaded, skipping extraction
	I0818 18:47:10.665813 1158634 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0818 18:47:10.684051 1158634 docker.go:685] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-771033
	registry.k8s.io/kube-scheduler:v1.31.0
	registry.k8s.io/kube-apiserver:v1.31.0
	registry.k8s.io/kube-controller-manager:v1.31.0
	registry.k8s.io/kube-proxy:v1.31.0
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I0818 18:47:10.684077 1158634 cache_images.go:84] Images are preloaded, skipping loading
	I0818 18:47:10.684097 1158634 kubeadm.go:934] updating node { 192.168.39.95 8441 v1.31.0 docker true true} ...
	I0818 18:47:10.684244 1158634 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-771033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0818 18:47:10.684320 1158634 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0818 18:47:10.738070 1158634 extraconfig.go:124] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I0818 18:47:10.738178 1158634 cni.go:84] Creating CNI manager for ""
	I0818 18:47:10.738198 1158634 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 18:47:10.738209 1158634 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0818 18:47:10.738236 1158634 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.95 APIServerPort:8441 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-771033 NodeName:functional-771033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.95"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.95 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0818 18:47:10.738411 1158634 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.95
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-771033"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.95
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0818 18:47:10.738476 1158634 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0818 18:47:10.748279 1158634 binaries.go:44] Found k8s binaries, skipping transfer
	I0818 18:47:10.748338 1158634 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0818 18:47:10.757659 1158634 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0818 18:47:10.774794 1158634 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0818 18:47:10.790671 1158634 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2012 bytes)
	I0818 18:47:10.806965 1158634 ssh_runner.go:195] Run: grep 192.168.39.95	control-plane.minikube.internal$ /etc/hosts
	I0818 18:47:10.811624 1158634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:47:10.953614 1158634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:47:10.972513 1158634 certs.go:68] Setting up /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033 for IP: 192.168.39.95
	I0818 18:47:10.972530 1158634 certs.go:194] generating shared ca certs ...
	I0818 18:47:10.972553 1158634 certs.go:226] acquiring lock for ca certs: {Name:mk13776990cc7cce8623bb9f7048b7fd53736611 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:47:10.972757 1158634 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.key
	I0818 18:47:10.972812 1158634 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/proxy-client-ca.key
	I0818 18:47:10.972822 1158634 certs.go:256] generating profile certs ...
	I0818 18:47:10.972945 1158634 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.key
	I0818 18:47:10.973007 1158634 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/apiserver.key.cf650114
	I0818 18:47:10.973054 1158634 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/proxy-client.key
	I0818 18:47:10.973200 1158634 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/1152900.pem (1338 bytes)
	W0818 18:47:10.973284 1158634 certs.go:480] ignoring /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/1152900_empty.pem, impossibly tiny 0 bytes
	I0818 18:47:10.973293 1158634 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca-key.pem (1675 bytes)
	I0818 18:47:10.973328 1158634 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/ca.pem (1078 bytes)
	I0818 18:47:10.973359 1158634 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/cert.pem (1123 bytes)
	I0818 18:47:10.973384 1158634 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/key.pem (1679 bytes)
	I0818 18:47:10.973435 1158634 certs.go:484] found cert: /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/ssl/certs/11529002.pem (1708 bytes)
	I0818 18:47:10.974350 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0818 18:47:11.039202 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0818 18:47:11.112193 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0818 18:47:11.151405 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0818 18:47:11.184940 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0818 18:47:11.222071 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0818 18:47:11.255814 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0818 18:47:11.296026 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0818 18:47:11.327001 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/certs/1152900.pem --> /usr/share/ca-certificates/1152900.pem (1338 bytes)
	I0818 18:47:11.357765 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/ssl/certs/11529002.pem --> /usr/share/ca-certificates/11529002.pem (1708 bytes)
	I0818 18:47:11.393740 1158634 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19423-1145725/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0818 18:47:11.433484 1158634 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0818 18:47:11.502541 1158634 ssh_runner.go:195] Run: openssl version
	I0818 18:47:11.514444 1158634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1152900.pem && ln -fs /usr/share/ca-certificates/1152900.pem /etc/ssl/certs/1152900.pem"
	I0818 18:47:11.543132 1158634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1152900.pem
	I0818 18:47:11.556068 1158634 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Aug 18 18:45 /usr/share/ca-certificates/1152900.pem
	I0818 18:47:11.556171 1158634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1152900.pem
	I0818 18:47:11.575627 1158634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1152900.pem /etc/ssl/certs/51391683.0"
	I0818 18:47:11.604967 1158634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11529002.pem && ln -fs /usr/share/ca-certificates/11529002.pem /etc/ssl/certs/11529002.pem"
	I0818 18:47:11.631266 1158634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11529002.pem
	I0818 18:47:11.638568 1158634 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Aug 18 18:45 /usr/share/ca-certificates/11529002.pem
	I0818 18:47:11.638636 1158634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11529002.pem
	I0818 18:47:11.656890 1158634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11529002.pem /etc/ssl/certs/3ec20f2e.0"
	I0818 18:47:11.681063 1158634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0818 18:47:11.701548 1158634 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:47:11.710610 1158634 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 18 18:38 /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:47:11.710682 1158634 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0818 18:47:11.718975 1158634 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0818 18:47:11.733642 1158634 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0818 18:47:11.740563 1158634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0818 18:47:11.750590 1158634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0818 18:47:11.765553 1158634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0818 18:47:11.774055 1158634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0818 18:47:11.781486 1158634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0818 18:47:11.791556 1158634 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0818 18:47:11.800107 1158634 kubeadm.go:392] StartCluster: {Name:functional-771033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:47:11.800233 1158634 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 18:47:11.837654 1158634 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0818 18:47:11.858225 1158634 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0818 18:47:11.858237 1158634 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0818 18:47:11.858288 1158634 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0818 18:47:11.871516 1158634 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0818 18:47:11.872236 1158634 kubeconfig.go:125] found "functional-771033" server: "https://192.168.39.95:8441"
	I0818 18:47:11.874090 1158634 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0818 18:47:11.886177 1158634 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -22,7 +22,7 @@
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.39.95"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    enable-admission-plugins: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     allocate-node-cidrs: "true"
	
	-- /stdout --
	I0818 18:47:11.886201 1158634 kubeadm.go:1160] stopping kube-system containers ...
	I0818 18:47:11.886263 1158634 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0818 18:47:11.952557 1158634 docker.go:483] Stopping containers: [c4de3f53a26d e1a825e0ca44 5ad077dac72a 35ee16d13b79 d4a6727a157c c5771ab1f6ce 145b77ad63ec cde473f8e984 3537ffdbdfbb dc0bf8140a35 9e1e1cee3c41 163d5c0124cc 3c04014a97de 342f9a57301e b8dd751802bb b3348056df2c dae9f1201bf6 a6bf6cd3f233 759c12c404eb a706b381bbcf 720e87f4d1c6 e5293e30dad2 522687fd9f11 c804893854e6 533898c09d04 b5f81e9ed1e2 6f9672488822 4730bd898530 864379f94945 03033ee26187 5a6d47de9a00 14a3c6896f5d bab252c4dcac c0e3f9792225]
	I0818 18:47:11.952651 1158634 ssh_runner.go:195] Run: docker stop c4de3f53a26d e1a825e0ca44 5ad077dac72a 35ee16d13b79 d4a6727a157c c5771ab1f6ce 145b77ad63ec cde473f8e984 3537ffdbdfbb dc0bf8140a35 9e1e1cee3c41 163d5c0124cc 3c04014a97de 342f9a57301e b8dd751802bb b3348056df2c dae9f1201bf6 a6bf6cd3f233 759c12c404eb a706b381bbcf 720e87f4d1c6 e5293e30dad2 522687fd9f11 c804893854e6 533898c09d04 b5f81e9ed1e2 6f9672488822 4730bd898530 864379f94945 03033ee26187 5a6d47de9a00 14a3c6896f5d bab252c4dcac c0e3f9792225
	I0818 18:47:12.451354 1158634 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0818 18:47:12.498570 1158634 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0818 18:47:12.508705 1158634 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Aug 18 18:45 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 Aug 18 18:46 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Aug 18 18:45 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 Aug 18 18:46 /etc/kubernetes/scheduler.conf
	
	I0818 18:47:12.508763 1158634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0818 18:47:12.517439 1158634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0818 18:47:12.526044 1158634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0818 18:47:12.535422 1158634 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0818 18:47:12.535463 1158634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0818 18:47:12.544396 1158634 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0818 18:47:12.552728 1158634 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0818 18:47:12.552769 1158634 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0818 18:47:12.561428 1158634 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0818 18:47:12.570462 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 18:47:12.619724 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 18:47:13.497984 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0818 18:47:13.703218 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 18:47:13.796445 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0818 18:47:13.903564 1158634 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:47:13.903650 1158634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:47:14.404736 1158634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:47:14.904473 1158634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:47:14.924509 1158634 api_server.go:72] duration metric: took 1.020961281s to wait for apiserver process to appear ...
	I0818 18:47:14.924527 1158634 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:47:14.924552 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:14.925020 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:15.425638 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:18.104673 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 18:47:18.104698 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 18:47:18.104712 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:18.142465 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 18:47:18.142489 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 18:47:18.424879 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:18.431392 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 18:47:18.431412 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 18:47:18.924942 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:18.931936 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 18:47:18.931970 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 18:47:19.425592 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:19.437153 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 200:
	ok
	I0818 18:47:19.462483 1158634 api_server.go:141] control plane version: v1.31.0
	I0818 18:47:19.462508 1158634 api_server.go:131] duration metric: took 4.537975265s to wait for apiserver health ...
	I0818 18:47:19.462524 1158634 cni.go:84] Creating CNI manager for ""
	I0818 18:47:19.462536 1158634 cni.go:158] "kvm2" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0818 18:47:19.463979 1158634 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0818 18:47:19.465625 1158634 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0818 18:47:19.484837 1158634 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0818 18:47:19.527700 1158634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:47:19.539937 1158634 system_pods.go:59] 7 kube-system pods found
	I0818 18:47:19.539960 1158634 system_pods.go:61] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0818 18:47:19.539969 1158634 system_pods.go:61] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0818 18:47:19.539976 1158634 system_pods.go:61] "kube-apiserver-functional-771033" [50144038-6f50-40ec-91d8-5c6157da045a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 18:47:19.539981 1158634 system_pods.go:61] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0818 18:47:19.539986 1158634 system_pods.go:61] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0818 18:47:19.539995 1158634 system_pods.go:61] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0818 18:47:19.539999 1158634 system_pods.go:61] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:47:19.540007 1158634 system_pods.go:74] duration metric: took 12.291854ms to wait for pod list to return data ...
	I0818 18:47:19.540013 1158634 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:47:19.544841 1158634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:47:19.544862 1158634 node_conditions.go:123] node cpu capacity is 2
	I0818 18:47:19.544876 1158634 node_conditions.go:105] duration metric: took 4.858343ms to run NodePressure ...
	I0818 18:47:19.544898 1158634 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0818 18:47:19.877110 1158634 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0818 18:47:19.881450 1158634 kubeadm.go:739] kubelet initialised
	I0818 18:47:19.881459 1158634 kubeadm.go:740] duration metric: took 4.334239ms waiting for restarted kubelet to initialise ...
	I0818 18:47:19.881466 1158634 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:47:19.885868 1158634 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:21.897624 1158634 pod_ready.go:103] pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace has status "Ready":"False"
	I0818 18:47:22.393122 1158634 pod_ready.go:93] pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:22.393135 1158634 pod_ready.go:82] duration metric: took 2.507253626s for pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:22.393144 1158634 pod_ready.go:79] waiting up to 4m0s for pod "etcd-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:24.399711 1158634 pod_ready.go:103] pod "etcd-functional-771033" in "kube-system" namespace has status "Ready":"False"
	I0818 18:47:26.899570 1158634 pod_ready.go:103] pod "etcd-functional-771033" in "kube-system" namespace has status "Ready":"False"
	I0818 18:47:28.901947 1158634 pod_ready.go:93] pod "etcd-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:28.901963 1158634 pod_ready.go:82] duration metric: took 6.50881179s for pod "etcd-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.901973 1158634 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.908902 1158634 pod_ready.go:93] pod "kube-apiserver-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:28.908913 1158634 pod_ready.go:82] duration metric: took 6.933053ms for pod "kube-apiserver-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.908923 1158634 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.913147 1158634 pod_ready.go:93] pod "kube-controller-manager-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:28.913156 1158634 pod_ready.go:82] duration metric: took 4.226132ms for pod "kube-controller-manager-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.913165 1158634 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-f6krv" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.917153 1158634 pod_ready.go:93] pod "kube-proxy-f6krv" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:28.917162 1158634 pod_ready.go:82] duration metric: took 3.990886ms for pod "kube-proxy-f6krv" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:28.917171 1158634 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:30.924201 1158634 pod_ready.go:103] pod "kube-scheduler-functional-771033" in "kube-system" namespace has status "Ready":"False"
	I0818 18:47:31.923801 1158634 pod_ready.go:93] pod "kube-scheduler-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:31.923814 1158634 pod_ready.go:82] duration metric: took 3.006637067s for pod "kube-scheduler-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:31.923822 1158634 pod_ready.go:39] duration metric: took 12.042348471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:47:31.923839 1158634 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0818 18:47:31.936031 1158634 ops.go:34] apiserver oom_adj: -16
	I0818 18:47:31.936045 1158634 kubeadm.go:597] duration metric: took 20.077802345s to restartPrimaryControlPlane
	I0818 18:47:31.936052 1158634 kubeadm.go:394] duration metric: took 20.135957474s to StartCluster
	I0818 18:47:31.936071 1158634 settings.go:142] acquiring lock: {Name:mk4f0ebfd92664d0e6b948f3537153d6e758f3b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:47:31.936144 1158634 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19423-1145725/kubeconfig
	I0818 18:47:31.936920 1158634 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1145725/kubeconfig: {Name:mk782a24d297ff5aa7e33558024aff5df1a9ed4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:47:31.937222 1158634 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0818 18:47:31.937262 1158634 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0818 18:47:31.937324 1158634 addons.go:69] Setting storage-provisioner=true in profile "functional-771033"
	I0818 18:47:31.937381 1158634 addons.go:234] Setting addon storage-provisioner=true in "functional-771033"
	W0818 18:47:31.937386 1158634 addons.go:243] addon storage-provisioner should already be in state true
	I0818 18:47:31.937376 1158634 addons.go:69] Setting default-storageclass=true in profile "functional-771033"
	I0818 18:47:31.937413 1158634 host.go:66] Checking if "functional-771033" exists ...
	I0818 18:47:31.937426 1158634 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-771033"
	I0818 18:47:31.937443 1158634 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 18:47:31.937726 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:47:31.937768 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:47:31.937815 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:47:31.937852 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:47:31.938901 1158634 out.go:177] * Verifying Kubernetes components...
	I0818 18:47:31.940517 1158634 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0818 18:47:31.953501 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43771
	I0818 18:47:31.954090 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:47:31.954674 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:47:31.954690 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:47:31.955028 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:47:31.955206 1158634 main.go:141] libmachine: (functional-771033) Calling .GetState
	I0818 18:47:31.957681 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43031
	I0818 18:47:31.958049 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:47:31.958055 1158634 addons.go:234] Setting addon default-storageclass=true in "functional-771033"
	W0818 18:47:31.958067 1158634 addons.go:243] addon default-storageclass should already be in state true
	I0818 18:47:31.958095 1158634 host.go:66] Checking if "functional-771033" exists ...
	I0818 18:47:31.958520 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:47:31.958553 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:47:31.958597 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:47:31.958614 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:47:31.958984 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:47:31.959655 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:47:31.959695 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:47:31.973880 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34067
	I0818 18:47:31.974206 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I0818 18:47:31.974439 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:47:31.974637 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:47:31.974904 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:47:31.974920 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:47:31.975086 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:47:31.975106 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:47:31.975186 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:47:31.975449 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:47:31.975653 1158634 main.go:141] libmachine: (functional-771033) Calling .GetState
	I0818 18:47:31.975769 1158634 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:47:31.975807 1158634 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:47:31.977426 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:47:31.979235 1158634 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0818 18:47:31.980577 1158634 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:47:31.980586 1158634 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0818 18:47:31.980599 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:47:31.983819 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:47:31.984327 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:47:31.984352 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:47:31.984480 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:47:31.984660 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:47:31.984781 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:47:31.984915 1158634 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
	I0818 18:47:31.991731 1158634 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39947
	I0818 18:47:31.992291 1158634 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:47:31.992754 1158634 main.go:141] libmachine: Using API Version  1
	I0818 18:47:31.992763 1158634 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:47:31.993075 1158634 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:47:31.993294 1158634 main.go:141] libmachine: (functional-771033) Calling .GetState
	I0818 18:47:31.994772 1158634 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:47:31.994999 1158634 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0818 18:47:31.995009 1158634 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0818 18:47:31.995027 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
	I0818 18:47:31.997976 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:47:31.998407 1158634 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
	I0818 18:47:31.998441 1158634 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
	I0818 18:47:31.998592 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
	I0818 18:47:31.998756 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
	I0818 18:47:31.998900 1158634 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
	I0818 18:47:31.999089 1158634 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
	I0818 18:47:32.139907 1158634 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0818 18:47:32.154798 1158634 node_ready.go:35] waiting up to 6m0s for node "functional-771033" to be "Ready" ...
	I0818 18:47:32.159114 1158634 node_ready.go:49] node "functional-771033" has status "Ready":"True"
	I0818 18:47:32.159128 1158634 node_ready.go:38] duration metric: took 4.301629ms for node "functional-771033" to be "Ready" ...
	I0818 18:47:32.159139 1158634 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:47:32.165516 1158634 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:32.171715 1158634 pod_ready.go:93] pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:32.171724 1158634 pod_ready.go:82] duration metric: took 6.197121ms for pod "coredns-6f6b679f8f-jr2fb" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:32.171732 1158634 pod_ready.go:79] waiting up to 6m0s for pod "etcd-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:32.226394 1158634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0818 18:47:32.240011 1158634 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
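The two `kubectl apply` invocations just logged install the storage-provisioner and default-storageclass addon manifests using the cluster's bundled kubectl binary and the in-VM kubeconfig. A hedged Go sketch of the equivalent local invocation is below; paths are taken from the log lines, but running them outside minikube's ssh_runner is purely illustrative.

```go
// Sketch: apply the two addon manifests the way the logged commands do.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.0/kubectl"
	kubeconfig := "KUBECONFIG=/var/lib/minikube/kubeconfig"
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	}
	for _, m := range manifests {
		// sudo accepts VAR=value assignments before the command,
		// exactly as in the logged invocation.
		cmd := exec.Command("sudo", kubeconfig, kubectl, "apply", "-f", m)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("apply %s failed: %v\n%s", m, err, out)
		}
	}
}
```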
	I0818 18:47:32.498125 1158634 pod_ready.go:93] pod "etcd-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:32.498147 1158634 pod_ready.go:82] duration metric: took 326.408239ms for pod "etcd-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:32.498159 1158634 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:32.828155 1158634 main.go:141] libmachine: Making call to close driver server
	I0818 18:47:32.828160 1158634 main.go:141] libmachine: Making call to close driver server
	I0818 18:47:32.828173 1158634 main.go:141] libmachine: (functional-771033) Calling .Close
	I0818 18:47:32.828175 1158634 main.go:141] libmachine: (functional-771033) Calling .Close
	I0818 18:47:32.828502 1158634 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:47:32.828514 1158634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:47:32.828522 1158634 main.go:141] libmachine: Making call to close driver server
	I0818 18:47:32.828528 1158634 main.go:141] libmachine: (functional-771033) Calling .Close
	I0818 18:47:32.828663 1158634 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
	I0818 18:47:32.828678 1158634 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:47:32.828684 1158634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:47:32.828692 1158634 main.go:141] libmachine: Making call to close driver server
	I0818 18:47:32.828699 1158634 main.go:141] libmachine: (functional-771033) Calling .Close
	I0818 18:47:32.828787 1158634 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:47:32.828793 1158634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:47:32.829001 1158634 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
	I0818 18:47:32.829022 1158634 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:47:32.829026 1158634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:47:32.835455 1158634 main.go:141] libmachine: Making call to close driver server
	I0818 18:47:32.835463 1158634 main.go:141] libmachine: (functional-771033) Calling .Close
	I0818 18:47:32.835687 1158634 main.go:141] libmachine: Successfully made call to close driver server
	I0818 18:47:32.835696 1158634 main.go:141] libmachine: Making call to close connection to plugin binary
	I0818 18:47:32.837773 1158634 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0818 18:47:32.839091 1158634 addons.go:510] duration metric: took 901.832388ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0818 18:47:32.898008 1158634 pod_ready.go:93] pod "kube-apiserver-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:32.898024 1158634 pod_ready.go:82] duration metric: took 399.857357ms for pod "kube-apiserver-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:32.898036 1158634 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:33.298097 1158634 pod_ready.go:93] pod "kube-controller-manager-functional-771033" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:33.298110 1158634 pod_ready.go:82] duration metric: took 400.06857ms for pod "kube-controller-manager-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:33.298119 1158634 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-f6krv" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:33.698288 1158634 pod_ready.go:93] pod "kube-proxy-f6krv" in "kube-system" namespace has status "Ready":"True"
	I0818 18:47:33.698301 1158634 pod_ready.go:82] duration metric: took 400.177062ms for pod "kube-proxy-f6krv" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:33.698310 1158634 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-functional-771033" in "kube-system" namespace to be "Ready" ...
	I0818 18:47:33.895948 1158634 pod_ready.go:98] error getting pod "kube-scheduler-functional-771033" in "kube-system" namespace (skipping!): Get "https://192.168.39.95:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-771033": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:33.895974 1158634 pod_ready.go:82] duration metric: took 197.656257ms for pod "kube-scheduler-functional-771033" in "kube-system" namespace to be "Ready" ...
	E0818 18:47:33.895989 1158634 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "kube-scheduler-functional-771033" in "kube-system" namespace (skipping!): Get "https://192.168.39.95:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-771033": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:33.896025 1158634 pod_ready.go:39] duration metric: took 1.736874148s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0818 18:47:33.896052 1158634 api_server.go:52] waiting for apiserver process to appear ...
	I0818 18:47:33.896118 1158634 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:47:33.928906 1158634 api_server.go:72] duration metric: took 1.991655041s to wait for apiserver process to appear ...
	I0818 18:47:33.928924 1158634 api_server.go:88] waiting for apiserver healthz status ...
	I0818 18:47:33.928943 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:33.929527 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:34.429124 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:34.429812 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:34.929088 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:34.929773 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:35.429315 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:35.429886 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:35.929465 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:35.930158 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:36.429759 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:36.430449 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:36.929077 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:36.929815 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:37.429013 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:37.429766 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:37.929364 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:37.930026 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:38.429629 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:38.430312 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:38.929939 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:38.930644 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:39.429275 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:39.429944 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:39.930021 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:39.930702 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:40.429251 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:40.429837 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:40.929429 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:40.930150 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:41.429816 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:41.430536 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:41.929415 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:41.930134 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:42.429257 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:42.429919 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:42.929516 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:42.930179 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:43.429852 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:43.430470 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:43.929096 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:43.929798 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:44.429322 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:44.429974 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:44.929058 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:44.929809 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:45.429322 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:45.429914 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:45.929315 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:45.929933 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:46.429356 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:46.429950 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:46.929328 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:46.929990 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:47.429665 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:47.430256 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:47.929919 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:47.930620 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:48.429157 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:48.429764 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:48.929301 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:48.929944 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:49.429530 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:49.430153 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:49.930004 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:49.930640 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:50.429195 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:50.429922 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:50.929517 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:50.930172 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:51.429842 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:51.430506 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:51.929271 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:51.929918 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:52.429313 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:52.430004 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:52.929695 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:52.930328 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:53.429987 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:53.430606 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:53.929188 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:53.929953 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:54.429528 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:54.430256 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:54.929263 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:54.929956 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:55.429491 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:55.430192 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:55.929811 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:55.930524 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:56.429125 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:56.429891 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:56.929445 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:56.930161 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:57.429803 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:57.430427 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:57.929030 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:57.929778 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:58.429392 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:58.430157 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:58.929751 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:58.930464 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:59.429042 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:59.429804 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:47:59.929735 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:47:59.930400 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:00.430052 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:00.430757 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:00.929355 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:00.929971 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:01.429320 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:01.430139 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:01.929993 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:01.930704 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:02.429269 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:02.430008 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:02.929684 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:02.930409 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:03.430037 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:03.430714 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:03.929298 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:03.930064 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:04.429681 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:04.430287 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:04.929513 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:04.930142 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:05.429874 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:05.430573 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:05.929140 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:05.929854 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:06.429342 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:06.429984 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:06.929529 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:06.930217 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:07.429942 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:07.430546 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:07.929102 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:07.929792 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:08.429396 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:08.430124 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:08.929788 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:08.930461 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:09.429066 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:09.429795 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:09.929600 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:09.930277 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:10.429956 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:10.430615 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:10.929156 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:10.929767 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:11.429491 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:11.430124 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:11.929881 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:11.930424 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:12.428993 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:12.429679 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:12.929803 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:12.930419 1158634 api_server.go:269] stopped: https://192.168.39.95:8441/healthz: Get "https://192.168.39.95:8441/healthz": dial tcp 192.168.39.95:8441: connect: connection refused
	I0818 18:48:13.429991 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:15.156957 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 18:48:15.156985 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 18:48:15.157002 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:15.179828 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0818 18:48:15.179852 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0818 18:48:15.429182 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:15.433603 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 18:48:15.433621 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 18:48:15.929198 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:15.933946 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0818 18:48:15.933968 1158634 api_server.go:103] status: https://192.168.39.95:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0818 18:48:16.429575 1158634 api_server.go:253] Checking apiserver healthz at https://192.168.39.95:8441/healthz ...
	I0818 18:48:16.435519 1158634 api_server.go:279] https://192.168.39.95:8441/healthz returned 200:
	ok
	I0818 18:48:16.444034 1158634 api_server.go:141] control plane version: v1.31.0
	I0818 18:48:16.444051 1158634 api_server.go:131] duration metric: took 42.515122562s to wait for apiserver health ...
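The roughly 42 seconds of lines above are a fixed-interval poll of the apiserver's /healthz endpoint: the probe retries about every 500ms through "connection refused", then a 403 for the anonymous user while RBAC bootstrap roles are still being created, then a verbose 500 while post-start hooks finish, and finally succeeds on 200 "ok". A minimal Go sketch of such a poll is below; the insecure TLS config and timeout values are assumptions for illustration, not minikube's actual client setup.

```go
// Sketch: poll /healthz every ~500ms until it returns 200 OK or the
// deadline expires; any non-200 answer just means "keep waiting".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
			// 403 or 500 with per-check detail means the apiserver is up
			// but still finishing its post-start hooks; keep polling.
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.39.95:8441/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```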
	I0818 18:48:16.444060 1158634 system_pods.go:43] waiting for kube-system pods to appear ...
	I0818 18:48:16.450928 1158634 system_pods.go:59] 7 kube-system pods found
	I0818 18:48:16.450944 1158634 system_pods.go:61] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:16.450947 1158634 system_pods.go:61] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:16.450950 1158634 system_pods.go:61] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:16.450953 1158634 system_pods.go:61] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:16.450956 1158634 system_pods.go:61] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:16.450958 1158634 system_pods.go:61] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:16.450964 1158634 system_pods.go:61] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:16.450970 1158634 system_pods.go:74] duration metric: took 6.906098ms to wait for pod list to return data ...
	I0818 18:48:16.450978 1158634 default_sa.go:34] waiting for default service account to be created ...
	I0818 18:48:16.454081 1158634 default_sa.go:45] found service account: "default"
	I0818 18:48:16.454093 1158634 default_sa.go:55] duration metric: took 3.110353ms for default service account to be created ...
	I0818 18:48:16.454100 1158634 system_pods.go:116] waiting for k8s-apps to be running ...
	I0818 18:48:16.459600 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:16.459613 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:16.459619 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:16.459622 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:16.459625 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:16.459628 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:16.459630 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:16.459635 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:16.459651 1158634 retry.go:31] will retry after 211.129846ms: missing components: kube-apiserver
	I0818 18:48:16.676438 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:16.676454 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:16.676458 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:16.676461 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:16.676465 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:16.676467 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:16.676470 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:16.676475 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:16.676490 1158634 retry.go:31] will retry after 311.734201ms: missing components: kube-apiserver
	I0818 18:48:16.993969 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:16.993986 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:16.993990 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:16.993993 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:16.993997 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:16.993999 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:16.994002 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:16.994007 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:16.994021 1158634 retry.go:31] will retry after 317.73006ms: missing components: kube-apiserver
	I0818 18:48:17.317902 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:17.317919 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:17.317923 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:17.317926 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:17.317929 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:17.317931 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:17.317934 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:17.317939 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:17.317956 1158634 retry.go:31] will retry after 594.967704ms: missing components: kube-apiserver
	I0818 18:48:17.918853 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:17.918870 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:17.918874 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:17.918878 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:17.918881 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:17.918883 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:17.918886 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:17.918892 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:17.918905 1158634 retry.go:31] will retry after 682.693224ms: missing components: kube-apiserver
	I0818 18:48:18.606687 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:18.606704 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:18.606708 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:18.606711 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:18.606714 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:18.606716 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:18.606719 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:18.606724 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:18.606738 1158634 retry.go:31] will retry after 627.683554ms: missing components: kube-apiserver
	I0818 18:48:19.239862 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:19.239879 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:19.239883 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:19.239885 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:19.239888 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:19.239891 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:19.239893 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:19.239898 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:19.239911 1158634 retry.go:31] will retry after 736.394257ms: missing components: kube-apiserver
	I0818 18:48:19.983335 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:19.983355 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:19.983359 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:19.983362 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:19.983366 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:19.983368 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:19.983370 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:19.983378 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:19.983391 1158634 retry.go:31] will retry after 1.006828126s: missing components: kube-apiserver
	I0818 18:48:20.995239 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:20.995255 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:20.995260 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:20.995264 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:20.995267 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:20.995269 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:20.995271 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:20.995277 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:20.995291 1158634 retry.go:31] will retry after 1.577247013s: missing components: kube-apiserver
	I0818 18:48:22.578942 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:22.578958 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:22.578962 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:22.578965 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:22.578969 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:22.578971 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:22.578974 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:22.578978 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:22.578993 1158634 retry.go:31] will retry after 2.241204737s: missing components: kube-apiserver
	I0818 18:48:24.825327 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:24.825343 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:24.825347 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:24.825350 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:24.825353 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:24.825356 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:24.825358 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:24.825363 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:24.825379 1158634 retry.go:31] will retry after 1.953357523s: missing components: kube-apiserver
	I0818 18:48:26.786001 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:26.786018 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:26.786022 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:26.786025 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Pending
	I0818 18:48:26.786028 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:26.786031 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:26.786033 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:26.786038 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:26.786052 1158634 retry.go:31] will retry after 2.541621745s: missing components: kube-apiserver
	I0818 18:48:29.335628 1158634 system_pods.go:86] 7 kube-system pods found
	I0818 18:48:29.335645 1158634 system_pods.go:89] "coredns-6f6b679f8f-jr2fb" [590318eb-621f-4f74-b5be-0b6268a28d4d] Running
	I0818 18:48:29.335649 1158634 system_pods.go:89] "etcd-functional-771033" [198cabf4-335e-4a96-b8ff-296969689489] Running
	I0818 18:48:29.335655 1158634 system_pods.go:89] "kube-apiserver-functional-771033" [4186abeb-76d6-4bf5-beef-2994f20dcef1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0818 18:48:29.335659 1158634 system_pods.go:89] "kube-controller-manager-functional-771033" [18d5e8f2-ab21-4d53-a1d6-2259043375d6] Running
	I0818 18:48:29.335663 1158634 system_pods.go:89] "kube-proxy-f6krv" [6d61848c-ac48-4004-bbfd-99325c6c6b5e] Running
	I0818 18:48:29.335665 1158634 system_pods.go:89] "kube-scheduler-functional-771033" [17e856bc-5dd1-4979-8d62-5ea894a05851] Running
	I0818 18:48:29.335670 1158634 system_pods.go:89] "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0818 18:48:29.335677 1158634 system_pods.go:126] duration metric: took 12.881572523s to wait for k8s-apps to be running ...
	I0818 18:48:29.335684 1158634 system_svc.go:44] waiting for kubelet service to be running ....
	I0818 18:48:29.335735 1158634 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:48:29.351705 1158634 system_svc.go:56] duration metric: took 16.009623ms WaitForService to wait for kubelet
	I0818 18:48:29.351728 1158634 kubeadm.go:582] duration metric: took 57.41448323s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0818 18:48:29.351753 1158634 node_conditions.go:102] verifying NodePressure condition ...
	I0818 18:48:29.354857 1158634 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0818 18:48:29.354869 1158634 node_conditions.go:123] node cpu capacity is 2
	I0818 18:48:29.354879 1158634 node_conditions.go:105] duration metric: took 3.122283ms to run NodePressure ...
	I0818 18:48:29.354890 1158634 start.go:241] waiting for startup goroutines ...
	I0818 18:48:29.354896 1158634 start.go:246] waiting for cluster config update ...
	I0818 18:48:29.354906 1158634 start.go:255] writing updated cluster config ...
	I0818 18:48:29.355209 1158634 ssh_runner.go:195] Run: rm -f paused
	I0818 18:48:29.406644 1158634 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0818 18:48:29.409136 1158634 out.go:177] * Done! kubectl is now configured to use "functional-771033" cluster and "default" namespace by default
	
	
	==> Docker <==
	Aug 18 18:48:03 functional-771033 dockerd[5825]: time="2024-08-18T18:48:03.848795158Z" level=info msg="ignoring event" container=bffd4da2399ae4c89e95ebd31f42111939d5c8690a99d5eb5dfaa88f60d43963 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 18:48:03 functional-771033 dockerd[5832]: time="2024-08-18T18:48:03.851754763Z" level=info msg="shim disconnected" id=bffd4da2399ae4c89e95ebd31f42111939d5c8690a99d5eb5dfaa88f60d43963 namespace=moby
	Aug 18 18:48:03 functional-771033 dockerd[5832]: time="2024-08-18T18:48:03.852015195Z" level=warning msg="cleaning up after shim disconnected" id=bffd4da2399ae4c89e95ebd31f42111939d5c8690a99d5eb5dfaa88f60d43963 namespace=moby
	Aug 18 18:48:03 functional-771033 dockerd[5832]: time="2024-08-18T18:48:03.852240409Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 18:48:03 functional-771033 dockerd[5832]: time="2024-08-18T18:48:03.933604742Z" level=info msg="shim disconnected" id=c2c599815a6cb42a94b4f95053ec338304d970684fee6b6dc9e135d8dc496318 namespace=moby
	Aug 18 18:48:03 functional-771033 dockerd[5825]: time="2024-08-18T18:48:03.933831366Z" level=info msg="ignoring event" container=c2c599815a6cb42a94b4f95053ec338304d970684fee6b6dc9e135d8dc496318 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 18:48:03 functional-771033 dockerd[5832]: time="2024-08-18T18:48:03.934720943Z" level=warning msg="cleaning up after shim disconnected" id=c2c599815a6cb42a94b4f95053ec338304d970684fee6b6dc9e135d8dc496318 namespace=moby
	Aug 18 18:48:03 functional-771033 dockerd[5832]: time="2024-08-18T18:48:03.934807313Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.894373987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.894788213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.894901688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.895018386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.968799841Z" level=info msg="shim disconnected" id=448705e7b9ad526d3670cbbd5fa7edb60a61c1a7e47c95e8090cc8e7590bc712 namespace=moby
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.968876562Z" level=warning msg="cleaning up after shim disconnected" id=448705e7b9ad526d3670cbbd5fa7edb60a61c1a7e47c95e8090cc8e7590bc712 namespace=moby
	Aug 18 18:48:05 functional-771033 dockerd[5832]: time="2024-08-18T18:48:05.968886922Z" level=info msg="cleaning up dead shim" namespace=moby
	Aug 18 18:48:05 functional-771033 dockerd[5825]: time="2024-08-18T18:48:05.969651353Z" level=info msg="ignoring event" container=448705e7b9ad526d3670cbbd5fa7edb60a61c1a7e47c95e8090cc8e7590bc712 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Aug 18 18:48:12 functional-771033 dockerd[5832]: time="2024-08-18T18:48:12.916194905Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:48:12 functional-771033 dockerd[5832]: time="2024-08-18T18:48:12.918841978Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:48:12 functional-771033 dockerd[5832]: time="2024-08-18T18:48:12.918980547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:12 functional-771033 dockerd[5832]: time="2024-08-18T18:48:12.919215840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:12 functional-771033 cri-dockerd[6114]: time="2024-08-18T18:48:12Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1f80ee99e4a9011eccb3bbd8dc1ca93a4528c0c012f67644b0ac6de420624440/resolv.conf as [nameserver 192.168.122.1]"
	Aug 18 18:48:13 functional-771033 dockerd[5832]: time="2024-08-18T18:48:13.077184005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Aug 18 18:48:13 functional-771033 dockerd[5832]: time="2024-08-18T18:48:13.077332893Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Aug 18 18:48:13 functional-771033 dockerd[5832]: time="2024-08-18T18:48:13.077373493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Aug 18 18:48:13 functional-771033 dockerd[5832]: time="2024-08-18T18:48:13.077494219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1df53b51ee39c       604f5db92eaa8       17 seconds ago       Running             kube-apiserver            0                   1f80ee99e4a90       kube-apiserver-functional-771033
	448705e7b9ad5       6e38f40d628db       25 seconds ago       Exited              storage-provisioner       5                   4ffbabb8af4ee       storage-provisioner
	a276c2d0de3ca       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   95b27eb8893a1       coredns-6f6b679f8f-jr2fb
	5162d714630ec       ad83b2ca7b09e       About a minute ago   Running             kube-proxy                3                   7fd05b94cc3a5       kube-proxy-f6krv
	0565a39bb52bf       2e96e5913fc06       About a minute ago   Running             etcd                      3                   995ba9fbea38b       etcd-functional-771033
	4854ca3a4bea4       045733566833c       About a minute ago   Running             kube-controller-manager   3                   ed1547c0983af       kube-controller-manager-functional-771033
	354afb0d0718f       1766f54c897f0       About a minute ago   Running             kube-scheduler            3                   8dbc777d2d2aa       kube-scheduler-functional-771033
	592a9b17d86a8       2e96e5913fc06       About a minute ago   Created             etcd                      2                   d4a6727a157c5       etcd-functional-771033
	8e0a450f68a4c       ad83b2ca7b09e       About a minute ago   Created             kube-proxy                2                   cde473f8e984a       kube-proxy-f6krv
	c4de3f53a26dc       045733566833c       About a minute ago   Created             kube-controller-manager   2                   c5771ab1f6cee       kube-controller-manager-functional-771033
	e1a825e0ca445       1766f54c897f0       About a minute ago   Exited              kube-scheduler            2                   145b77ad63eca       kube-scheduler-functional-771033
	3537ffdbdfbb8       cbb01a7bd410d       About a minute ago   Exited              coredns                   1                   163d5c0124cc7       coredns-6f6b679f8f-jr2fb
	
	
	==> coredns [3537ffdbdfbb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:53610 - 9083 "HINFO IN 7019311184401319666.9013614242261889566. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020742812s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a276c2d0de3c] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:35392 - 15210 "HINFO IN 9149333849223737842.3715552012028273754. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029200035s
	
	
	==> describe nodes <==
	Name:               functional-771033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-771033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3607dd695a2685a662a9ebe804e6840665786af5
	                    minikube.k8s.io/name=functional-771033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_18T18_45_52_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 18 Aug 2024 18:45:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-771033
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 18 Aug 2024 18:48:29 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Sun, 18 Aug 2024 18:47:18 +0000   Sun, 18 Aug 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Sun, 18 Aug 2024 18:47:18 +0000   Sun, 18 Aug 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Sun, 18 Aug 2024 18:47:18 +0000   Sun, 18 Aug 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Sun, 18 Aug 2024 18:47:18 +0000   Sun, 18 Aug 2024 18:48:16 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.95
	  Hostname:    functional-771033
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 c1d1b69e9dcc4cf9ae080243f90e5167
	  System UUID:                c1d1b69e-9dcc-4cf9-ae08-0243f90e5167
	  Boot ID:                    092eefd9-433d-419e-969a-6e2fd93db016
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.1.2
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6f6b679f8f-jr2fb                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m33s
	  kube-system                 etcd-functional-771033                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m39s
	  kube-system                 kube-apiserver-functional-771033             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 kube-controller-manager-functional-771033    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-proxy-f6krv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 kube-scheduler-functional-771033             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 70s                    kube-proxy       
	  Normal  Starting                 117s                   kube-proxy       
	  Normal  Starting                 2m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m44s (x8 over 2m44s)  kubelet          Node functional-771033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m44s (x8 over 2m44s)  kubelet          Node functional-771033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m44s (x7 over 2m44s)  kubelet          Node functional-771033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     2m38s                  kubelet          Node functional-771033 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m38s                  kubelet          Node functional-771033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s                  kubelet          Node functional-771033 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                2m36s                  kubelet          Node functional-771033 status is now: NodeReady
	  Normal  RegisteredNode           2m34s                  node-controller  Node functional-771033 event: Registered Node functional-771033 in Controller
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)    kubelet          Node functional-771033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)    kubelet          Node functional-771033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)    kubelet          Node functional-771033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           116s                   node-controller  Node functional-771033 event: Registered Node functional-771033 in Controller
	  Normal  NodeHasSufficientMemory  77s (x8 over 77s)      kubelet          Node functional-771033 status is now: NodeHasSufficientMemory
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    77s (x8 over 77s)      kubelet          Node functional-771033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     77s (x7 over 77s)      kubelet          Node functional-771033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  77s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                    node-controller  Node functional-771033 event: Registered Node functional-771033 in Controller
	  Normal  NodeNotReady             14s                    node-controller  Node functional-771033 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.139694] systemd-fstab-generator[3710]: Ignoring "noauto" option for root device
	[  +0.160021] systemd-fstab-generator[3725]: Ignoring "noauto" option for root device
	[  +0.520033] systemd-fstab-generator[3899]: Ignoring "noauto" option for root device
	[  +1.808467] systemd-fstab-generator[4020]: Ignoring "noauto" option for root device
	[  +0.067780] kauditd_printk_skb: 137 callbacks suppressed
	[  +5.008156] kauditd_printk_skb: 74 callbacks suppressed
	[  +7.496805] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.647699] systemd-fstab-generator[4925]: Ignoring "noauto" option for root device
	[ +11.035497] systemd-fstab-generator[5348]: Ignoring "noauto" option for root device
	[  +0.091843] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.228503] systemd-fstab-generator[5383]: Ignoring "noauto" option for root device
	[  +0.156320] systemd-fstab-generator[5395]: Ignoring "noauto" option for root device
	[  +0.171386] systemd-fstab-generator[5409]: Ignoring "noauto" option for root device
	[Aug18 18:47] kauditd_printk_skb: 91 callbacks suppressed
	[  +7.519969] systemd-fstab-generator[6062]: Ignoring "noauto" option for root device
	[  +0.135117] systemd-fstab-generator[6074]: Ignoring "noauto" option for root device
	[  +0.128926] systemd-fstab-generator[6086]: Ignoring "noauto" option for root device
	[  +0.157908] systemd-fstab-generator[6101]: Ignoring "noauto" option for root device
	[  +0.510684] systemd-fstab-generator[6278]: Ignoring "noauto" option for root device
	[  +2.746239] systemd-fstab-generator[6983]: Ignoring "noauto" option for root device
	[  +1.015203] kauditd_printk_skb: 205 callbacks suppressed
	[  +5.141677] kauditd_printk_skb: 42 callbacks suppressed
	[ +12.255339] systemd-fstab-generator[8158]: Ignoring "noauto" option for root device
	[  +0.107207] kauditd_printk_skb: 19 callbacks suppressed
	[  +5.831869] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [0565a39bb52b] <==
	{"level":"info","ts":"2024-08-18T18:47:15.463115Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-18T18:47:15.469622Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"a71e7bac075997","initial-advertise-peer-urls":["https://192.168.39.95:2380"],"listen-peer-urls":["https://192.168.39.95:2380"],"advertise-client-urls":["https://192.168.39.95:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.95:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-18T18:47:15.469639Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-18T18:47:15.470448Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-18T18:47:15.475119Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-08-18T18:47:15.475361Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","added-peer-id":"a71e7bac075997","added-peer-peer-urls":["https://192.168.39.95:2380"]}
	{"level":"info","ts":"2024-08-18T18:47:15.478269Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"986e33f48d4d13ba","local-member-id":"a71e7bac075997","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:47:15.478318Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-18T18:47:15.489302Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.39.95:2380"}
	{"level":"info","ts":"2024-08-18T18:47:16.789525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 is starting a new election at term 3"}
	{"level":"info","ts":"2024-08-18T18:47:16.789583Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-08-18T18:47:16.789631Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgPreVoteResp from a71e7bac075997 at term 3"}
	{"level":"info","ts":"2024-08-18T18:47:16.789645Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became candidate at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:16.789672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 received MsgVoteResp from a71e7bac075997 at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:16.789683Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a71e7bac075997 became leader at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:16.789690Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a71e7bac075997 elected leader a71e7bac075997 at term 4"}
	{"level":"info","ts":"2024-08-18T18:47:16.795260Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"a71e7bac075997","local-member-attributes":"{Name:functional-771033 ClientURLs:[https://192.168.39.95:2379]}","request-path":"/0/members/a71e7bac075997/attributes","cluster-id":"986e33f48d4d13ba","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-18T18:47:16.795298Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:47:16.795794Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-18T18:47:16.795893Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-18T18:47:16.795806Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-18T18:47:16.796558Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:47:16.797016Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-18T18:47:16.797549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-18T18:47:16.798204Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.95:2379"}
	
	
	==> etcd [592a9b17d86a] <==
	
	
	==> kernel <==
	 18:48:30 up 3 min,  0 users,  load average: 1.26, 0.93, 0.38
	Linux functional-771033 5.10.207 #1 SMP Thu Aug 15 21:30:57 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1df53b51ee39] <==
	I0818 18:48:15.052315       1 crdregistration_controller.go:114] Starting crd-autoregister controller
	I0818 18:48:15.052455       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0818 18:48:15.123969       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0818 18:48:15.125139       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0818 18:48:15.201583       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0818 18:48:15.201623       1 policy_source.go:224] refreshing policies
	I0818 18:48:15.235522       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0818 18:48:15.245135       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0818 18:48:15.245170       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0818 18:48:15.246471       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0818 18:48:15.250525       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0818 18:48:15.250962       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0818 18:48:15.251708       1 shared_informer.go:320] Caches are synced for configmaps
	I0818 18:48:15.252948       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0818 18:48:15.253466       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0818 18:48:15.253708       1 aggregator.go:171] initial CRD sync complete...
	I0818 18:48:15.253752       1 autoregister_controller.go:144] Starting autoregister controller
	I0818 18:48:15.253770       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0818 18:48:15.253789       1 cache.go:39] Caches are synced for autoregister controller
	I0818 18:48:15.254818       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I0818 18:48:15.298292       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0818 18:48:16.050946       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0818 18:48:16.264604       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.95]
	I0818 18:48:16.266283       1 controller.go:615] quota admission added evaluator for: endpoints
	I0818 18:48:16.270748       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4854ca3a4bea] <==
	E0818 18:48:11.433388       1 node_lifecycle_controller.go:720] "Failed while getting a Node to retry updating node health. Probably Node was deleted" logger="node-lifecycle-controller" node="functional-771033"
	E0818 18:48:11.433467       1 node_lifecycle_controller.go:725] "Update health of Node from Controller error, Skipping - no pods will be evicted" err="Get \"https://192.168.39.95:8441/api/v1/nodes/functional-771033\": dial tcp 192.168.39.95:8441: connect: connection refused" logger="node-lifecycle-controller" node=""
	E0818 18:48:15.148321       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Lease: unknown (get leases.coordination.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.148572       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ClusterRole: unknown (get clusterroles.rbac.authorization.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.148676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ValidatingAdmissionPolicyBinding: unknown (get validatingadmissionpolicybindings.admissionregistration.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.148783       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError"
	E0818 18:48:15.148855       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Role: unknown (get roles.rbac.authorization.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.148935       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityClass: unknown (get priorityclasses.scheduling.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.149013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: unknown (get csinodes.storage.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.149096       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ClusterRoleBinding: unknown (get clusterrolebindings.rbac.authorization.k8s.io)" logger="UnhandledError"
	E0818 18:48:15.151216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ControllerRevision: unknown (get controllerrevisions.apps)" logger="UnhandledError"
	E0818 18:48:15.151340       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PriorityLevelConfiguration: unknown (get prioritylevelconfigurations.flowcontrol.apiserver.k8s.io)" logger="UnhandledError"
	I0818 18:48:16.434718       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0818 18:48:16.476250       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-771033" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-771033\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-771033, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 50144038-6f50-40ec-91d8-5c6157da045a, UID in object meta: 4186abeb-76d6-4bf5-beef-2994f20dcef1"
	E0818 18:48:16.504706       1 node_lifecycle_controller.go:758] "Unhandled Error" err="unable to mark all pods NotReady on node functional-771033: Operation cannot be fulfilled on pods \"kube-apiserver-functional-771033\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-771033, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 50144038-6f50-40ec-91d8-5c6157da045a, UID in object meta: 4186abeb-76d6-4bf5-beef-2994f20dcef1; queuing for retry" logger="UnhandledError"
	I0818 18:48:16.506179       1 node_lifecycle_controller.go:1036] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	E0818 18:48:21.511148       1 node_lifecycle_controller.go:978] "Error updating node" err="Operation cannot be fulfilled on nodes \"functional-771033\": the object has been modified; please apply your changes to the latest version and try again" logger="node-lifecycle-controller" node="functional-771033"
	I0818 18:48:21.538466       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/coredns-6f6b679f8f-jr2fb" err="Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-jr2fb\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 18:48:21.543860       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/etcd-functional-771033" err="Operation cannot be fulfilled on pods \"etcd-functional-771033\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 18:48:21.547500       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-apiserver-functional-771033" err="Operation cannot be fulfilled on pods \"kube-apiserver-functional-771033\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-771033, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 50144038-6f50-40ec-91d8-5c6157da045a, UID in object meta: 4186abeb-76d6-4bf5-beef-2994f20dcef1"
	I0818 18:48:21.552790       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-controller-manager-functional-771033" err="Operation cannot be fulfilled on pods \"kube-controller-manager-functional-771033\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 18:48:21.557030       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-proxy-f6krv" err="Operation cannot be fulfilled on pods \"kube-proxy-f6krv\": the object has been modified; please apply your changes to the latest version and try again"
	I0818 18:48:21.563494       1 controller_utils.go:151] "Failed to update status for pod" logger="node-lifecycle-controller" pod="kube-system/kube-scheduler-functional-771033" err="Operation cannot be fulfilled on pods \"kube-scheduler-functional-771033\": the object has been modified; please apply your changes to the latest version and try again"
	E0818 18:48:21.563782       1 node_lifecycle_controller.go:758] "Unhandled Error" err="unable to mark all pods NotReady on node functional-771033: [Operation cannot be fulfilled on pods \"coredns-6f6b679f8f-jr2fb\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"etcd-functional-771033\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-apiserver-functional-771033\": StorageError: invalid object, Code: 4, Key: /registry/pods/kube-system/kube-apiserver-functional-771033, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 50144038-6f50-40ec-91d8-5c6157da045a, UID in object meta: 4186abeb-76d6-4bf5-beef-2994f20dcef1, Operation cannot be fulfilled on pods \"kube-controller-manager-functional-771033\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-proxy-f6krv\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on pods \"kube-scheduler-functional-771033\": the object has been modified; please apply your changes to the latest version and try again]; queuing for retry" logger="UnhandledError"
	I0818 18:48:26.565584       1 node_lifecycle_controller.go:1055] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [c4de3f53a26d] <==
	
	
	==> kube-proxy [5162d714630e] <==
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0818 18:47:20.049484       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0818 18:47:20.059806       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.95"]
	E0818 18:47:20.059920       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0818 18:47:20.117170       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I0818 18:47:20.117235       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0818 18:47:20.117265       1 server_linux.go:169] "Using iptables Proxier"
	I0818 18:47:20.127750       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0818 18:47:20.129332       1 server.go:483] "Version info" version="v1.31.0"
	I0818 18:47:20.130721       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:47:20.134008       1 config.go:197] "Starting service config controller"
	I0818 18:47:20.134047       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0818 18:47:20.134124       1 config.go:104] "Starting endpoint slice config controller"
	I0818 18:47:20.134128       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0818 18:47:20.134415       1 config.go:326] "Starting node config controller"
	I0818 18:47:20.134439       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0818 18:47:20.234857       1 shared_informer.go:320] Caches are synced for node config
	I0818 18:47:20.235048       1 shared_informer.go:320] Caches are synced for service config
	I0818 18:47:20.235106       1 shared_informer.go:320] Caches are synced for endpoint slice config
	E0818 18:48:15.233453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.EndpointSlice: unknown (get endpointslices.discovery.k8s.io)" logger="UnhandledError"
	
	
	==> kube-proxy [8e0a450f68a4] <==
	
	
	==> kube-scheduler [354afb0d0718] <==
	I0818 18:47:16.066688       1 serving.go:386] Generated self-signed cert in-memory
	W0818 18:47:18.085113       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0818 18:47:18.085153       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0818 18:47:18.085453       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0818 18:47:18.085802       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0818 18:47:18.162988       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
	I0818 18:47:18.163025       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0818 18:47:18.165858       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0818 18:47:18.165967       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0818 18:47:18.166388       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0818 18:47:18.166463       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0818 18:47:18.267309       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0818 18:48:15.128685       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: unknown (get persistentvolumes)" logger="UnhandledError"
	E0818 18:48:15.129146       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: unknown (get replicasets.apps)" logger="UnhandledError"
	E0818 18:48:15.129397       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: unknown (get namespaces)" logger="UnhandledError"
	
	
	==> kube-scheduler [e1a825e0ca44] <==
	
	
	==> kubelet <==
	Aug 18 18:48:04 functional-771033 kubelet[6990]: E0818 18:48:04.453245    6990 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: a6bf6cd3f233c984cd52bcff2a0a68b29dce520e47aa06f32f573e0f5b452c05" containerID="a6bf6cd3f233c984cd52bcff2a0a68b29dce520e47aa06f32f573e0f5b452c05"
	Aug 18 18:48:04 functional-771033 kubelet[6990]: I0818 18:48:04.453278    6990 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"a6bf6cd3f233c984cd52bcff2a0a68b29dce520e47aa06f32f573e0f5b452c05"} err="failed to get container status \"a6bf6cd3f233c984cd52bcff2a0a68b29dce520e47aa06f32f573e0f5b452c05\": rpc error: code = Unknown desc = Error response from daemon: No such container: a6bf6cd3f233c984cd52bcff2a0a68b29dce520e47aa06f32f573e0f5b452c05"
	Aug 18 18:48:05 functional-771033 kubelet[6990]: E0818 18:48:05.588219    6990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-771033?timeout=10s\": dial tcp 192.168.39.95:8441: connect: connection refused" interval="7s"
	Aug 18 18:48:05 functional-771033 kubelet[6990]: I0818 18:48:05.812866    6990 scope.go:117] "RemoveContainer" containerID="e552016113ce1766dc6af57b26ade7a78db21fafe0b4fbb31bed5694116effca"
	Aug 18 18:48:05 functional-771033 kubelet[6990]: I0818 18:48:05.818722    6990 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16cc9ffa28d8b1489ccffe7a75276630" path="/var/lib/kubelet/pods/16cc9ffa28d8b1489ccffe7a75276630/volumes"
	Aug 18 18:48:06 functional-771033 kubelet[6990]: I0818 18:48:06.440394    6990 scope.go:117] "RemoveContainer" containerID="e552016113ce1766dc6af57b26ade7a78db21fafe0b4fbb31bed5694116effca"
	Aug 18 18:48:06 functional-771033 kubelet[6990]: I0818 18:48:06.440650    6990 scope.go:117] "RemoveContainer" containerID="448705e7b9ad526d3670cbbd5fa7edb60a61c1a7e47c95e8090cc8e7590bc712"
	Aug 18 18:48:06 functional-771033 kubelet[6990]: E0818 18:48:06.440755    6990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01c74b7d-d168-47d2-8415-af0dcd45453e)\"" pod="kube-system/storage-provisioner" podUID="01c74b7d-d168-47d2-8415-af0dcd45453e"
	Aug 18 18:48:06 functional-771033 kubelet[6990]: I0818 18:48:06.441559    6990 status_manager.go:851] "Failed to get status for pod" podUID="01c74b7d-d168-47d2-8415-af0dcd45453e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.95:8441: connect: connection refused"
	Aug 18 18:48:12 functional-771033 kubelet[6990]: E0818 18:48:12.590374    6990 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8441/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/functional-771033?timeout=10s\": dial tcp 192.168.39.95:8441: connect: connection refused" interval="7s"
	Aug 18 18:48:12 functional-771033 kubelet[6990]: I0818 18:48:12.813423    6990 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-771033" podUID="50144038-6f50-40ec-91d8-5c6157da045a"
	Aug 18 18:48:12 functional-771033 kubelet[6990]: I0818 18:48:12.814273    6990 status_manager.go:851] "Failed to get status for pod" podUID="01c74b7d-d168-47d2-8415-af0dcd45453e" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/storage-provisioner\": dial tcp 192.168.39.95:8441: connect: connection refused"
	Aug 18 18:48:12 functional-771033 kubelet[6990]: E0818 18:48:12.814421    6990 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-771033\": dial tcp 192.168.39.95:8441: connect: connection refused" pod="kube-system/kube-apiserver-functional-771033"
	Aug 18 18:48:13 functional-771033 kubelet[6990]: I0818 18:48:13.518155    6990 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-771033" podUID="50144038-6f50-40ec-91d8-5c6157da045a"
	Aug 18 18:48:13 functional-771033 kubelet[6990]: E0818 18:48:13.839969    6990 iptables.go:577] "Could not set up iptables canary" err=<
	Aug 18 18:48:13 functional-771033 kubelet[6990]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Aug 18 18:48:13 functional-771033 kubelet[6990]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Aug 18 18:48:13 functional-771033 kubelet[6990]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Aug 18 18:48:13 functional-771033 kubelet[6990]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Aug 18 18:48:15 functional-771033 kubelet[6990]: I0818 18:48:15.273598    6990 kubelet.go:1900] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-functional-771033"
	Aug 18 18:48:15 functional-771033 kubelet[6990]: I0818 18:48:15.538733    6990 kubelet.go:1895] "Trying to delete pod" pod="kube-system/kube-apiserver-functional-771033" podUID="50144038-6f50-40ec-91d8-5c6157da045a"
	Aug 18 18:48:18 functional-771033 kubelet[6990]: I0818 18:48:18.813044    6990 scope.go:117] "RemoveContainer" containerID="448705e7b9ad526d3670cbbd5fa7edb60a61c1a7e47c95e8090cc8e7590bc712"
	Aug 18 18:48:18 functional-771033 kubelet[6990]: E0818 18:48:18.813253    6990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01c74b7d-d168-47d2-8415-af0dcd45453e)\"" pod="kube-system/storage-provisioner" podUID="01c74b7d-d168-47d2-8415-af0dcd45453e"
	Aug 18 18:48:29 functional-771033 kubelet[6990]: I0818 18:48:29.813453    6990 scope.go:117] "RemoveContainer" containerID="448705e7b9ad526d3670cbbd5fa7edb60a61c1a7e47c95e8090cc8e7590bc712"
	Aug 18 18:48:29 functional-771033 kubelet[6990]: E0818 18:48:29.813702    6990 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01c74b7d-d168-47d2-8415-af0dcd45453e)\"" pod="kube-system/storage-provisioner" podUID="01c74b7d-d168-47d2-8415-af0dcd45453e"
	
	
	==> storage-provisioner [448705e7b9ad] <==
	I0818 18:48:05.953141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0818 18:48:05.954879       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-771033 -n functional-771033
helpers_test.go:261: (dbg) Run:  kubectl --context functional-771033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/ComponentHealth FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/ComponentHealth (1.60s)

                                                
                                    
x
+
TestGvisorAddon (6.33s)

                                                
                                                
=== RUN   TestGvisorAddon
=== PAUSE TestGvisorAddon

                                                
                                                

                                                
                                                
=== CONT  TestGvisorAddon
gvisor_addon_test.go:52: (dbg) Run:  out/minikube-linux-amd64 start -p gvisor-313262 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 
E0818 19:37:37.504732 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
gvisor_addon_test.go:52: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p gvisor-313262 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 : exit status 63 (6.067412964s)

                                                
                                                
-- stdout --
	* [gvisor-313262] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to PROVIDER_KVM2_NOT_RUNNING: /usr/bin/virsh domcapabilities --virttype kvm timed out
	* Suggestion: Check that the libvirtd service is running and the socket is ready
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/

                                                
                                                
** /stderr **
gvisor_addon_test.go:54: failed to start minikube: args "out/minikube-linux-amd64 start -p gvisor-313262 --memory=2200 --container-runtime=containerd --docker-opt containerd=/var/run/containerd/containerd.sock --driver=kvm2 ": exit status 63
gvisor_addon_test.go:42: (dbg) Run:  kubectl --context gvisor-313262 logs gvisor -n kube-system
gvisor_addon_test.go:42: (dbg) Non-zero exit: kubectl --context gvisor-313262 logs gvisor -n kube-system: exit status 1 (49.328042ms)

                                                
                                                
** stderr ** 
	error: context "gvisor-313262" does not exist

                                                
                                                
** /stderr **
gvisor_addon_test.go:44: failed to get gvisor post-mortem logs: exit status 1
gvisor_addon_test.go:46: gvisor post-mortem: kubectl --context gvisor-313262 logs gvisor -n kube-system:

                                                
                                                
** stderr ** 
	error: context "gvisor-313262" does not exist

                                                
                                                
** /stderr **
gvisor_addon_test.go:48: *** TestGvisorAddon FAILED at 2024-08-18 19:37:40.297516454 +0000 UTC m=+3583.088713028
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p gvisor-313262 -n gvisor-313262
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p gvisor-313262 -n gvisor-313262: exit status 85 (56.120893ms)

                                                
                                                
-- stdout --
	* Profile "gvisor-313262" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p gvisor-313262"

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 85 (may be ok)
helpers_test.go:241: "gvisor-313262" host is not running, skipping log retrieval (state="* Profile \"gvisor-313262\" not found. Run \"minikube profile list\" to view all profiles.\n  To start a cluster, run: \"minikube start -p gvisor-313262\"")
helpers_test.go:175: Cleaning up "gvisor-313262" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p gvisor-313262
--- FAIL: TestGvisorAddon (6.33s)
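The PROVIDER_KVM2_NOT_RUNNING exit above means the host-side probe `/usr/bin/virsh domcapabilities --virttype kvm` timed out before minikube ever created the VM. As a minimal manual sketch of the check suggested in the stderr block (assuming a stock libvirt install on the host; the socket path shown is the usual default and is not taken from this report):

	systemctl status libvirtd                                   # is the libvirtd service active?
	ls -l /var/run/libvirt/libvirt-sock                         # does the default libvirt socket exist and is it readable?
	timeout 30 virsh domcapabilities --virttype kvm             # re-run the exact probe that timed out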

                                                
                                    
x
+
TestNoKubernetes/serial/Start (99.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --driver=kvm2 : exit status 90 (1m39.115558051s)

                                                
                                                
-- stdout --
	* [NoKubernetes-932889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting minikube without Kubernetes in cluster NoKubernetes-932889
	* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Aug 18 19:40:21 NoKubernetes-932889 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:21.615280239Z" level=info msg="Starting up"
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:21.616389161Z" level=info msg="containerd not running, starting managed containerd"
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:21.617213929Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=532
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.646897808Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.672746403Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.672916894Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673120292Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673215075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673369091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673433432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673718460Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673819103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673892547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.673953718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.674170297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.674536944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.676971478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.677128416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.677403792Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.677486404Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.677645197Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.677760801Z" level=info msg="metadata content store policy set" policy=shared
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.690758785Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.690929679Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.691009996Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.691133664Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.691205356Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.691370525Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.691914363Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692202078Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692286179Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692347776Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692401491Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692465904Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692535019Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692592638Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692649413Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692704907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692757401Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692811043Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692889476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692943640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.692998037Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693148912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693233641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693292547Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693343628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693402746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693460805Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693515875Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693566914Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693617863Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693669574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693733047Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693807074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693859709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.693936669Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694151261Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694239204Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694293218Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694344500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694394734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694444749Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.694513793Z" level=info msg="NRI interface is disabled by configuration."
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.695139876Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.695295697Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.695388005Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Aug 18 19:40:21 NoKubernetes-932889 dockerd[532]: time="2024-08-18T19:40:21.695447631Z" level=info msg="containerd successfully booted in 0.049872s"
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.656077138Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.674828739Z" level=info msg="Loading containers: start."
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.785200353Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.919795501Z" level=info msg="Loading containers: done."
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.934418786Z" level=info msg="Docker daemon" commit=f9522e5 containerd-snapshotter=false storage-driver=overlay2 version=27.1.2
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.934589211Z" level=info msg="Daemon has completed initialization"
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.997096459Z" level=info msg="API listen on /var/run/docker.sock"
	Aug 18 19:40:22 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:22.997194205Z" level=info msg="API listen on [::]:2376"
	Aug 18 19:40:22 NoKubernetes-932889 systemd[1]: Started Docker Application Container Engine.
	Aug 18 19:40:24 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:24.768080874Z" level=info msg="Processing signal 'terminated'"
	Aug 18 19:40:24 NoKubernetes-932889 systemd[1]: Stopping Docker Application Container Engine...
	Aug 18 19:40:24 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:24.769797623Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Aug 18 19:40:24 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:24.770525545Z" level=info msg="Daemon shutdown complete"
	Aug 18 19:40:24 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:24.770674920Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Aug 18 19:40:24 NoKubernetes-932889 dockerd[525]: time="2024-08-18T19:40:24.770710960Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Aug 18 19:40:25 NoKubernetes-932889 systemd[1]: docker.service: Deactivated successfully.
	Aug 18 19:40:25 NoKubernetes-932889 systemd[1]: Stopped Docker Application Container Engine.
	Aug 18 19:40:25 NoKubernetes-932889 systemd[1]: Starting Docker Application Container Engine...
	Aug 18 19:40:25 NoKubernetes-932889 dockerd[840]: time="2024-08-18T19:40:25.826698726Z" level=info msg="Starting up"
	Aug 18 19:41:25 NoKubernetes-932889 dockerd[840]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Aug 18 19:41:25 NoKubernetes-932889 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Aug 18 19:41:25 NoKubernetes-932889 systemd[1]: docker.service: Failed with result 'exit-code'.
	Aug 18 19:41:25 NoKubernetes-932889 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --driver=kvm2 " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-932889 -n NoKubernetes-932889
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-932889 -n NoKubernetes-932889: exit status 6 (261.917047ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0818 19:41:26.180853 1196877 status.go:417] kubeconfig endpoint: get endpoint: "NoKubernetes-932889" does not appear in /home/jenkins/minikube-integration/19423-1145725/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-932889" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/Start (99.38s)
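The RUNTIME_ENABLE failure above comes from `sudo systemctl restart docker` inside the guest: the journal shows dockerd restarting and then failing to dial /run/containerd/containerd.sock within its startup deadline. A minimal sketch of re-running the diagnosis that the error message itself points to, from the host while the VM is still up (the containerd status check is an assumption; the report only shows docker.service):

	minikube ssh -p NoKubernetes-932889 -- sudo systemctl status docker.service --no-pager
	minikube ssh -p NoKubernetes-932889 -- sudo journalctl -xeu docker.service --no-pager
	minikube ssh -p NoKubernetes-932889 -- sudo systemctl status containerd --no-pager   # assumption: containerd runs as its own systemd unit in the guest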

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (15.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932889 --driver=kvm2 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-932889 --driver=kvm2 : signal: killed (14.977353714s)

                                                
                                                
-- stdout --
	* [NoKubernetes-932889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-932889

                                                
                                                
-- /stdout --
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-linux-amd64 start -p NoKubernetes-932889 --driver=kvm2 " : signal: killed
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-932889 -n NoKubernetes-932889
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p NoKubernetes-932889 -n NoKubernetes-932889: exit status 7 (73.177687ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-932889" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (15.05s)

                                                
                                    

Test pass (305/340)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.13
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.12
12 TestDownloadOnly/v1.31.0/json-events 3.49
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.06
18 TestDownloadOnly/v1.31.0/DeleteAll 0.13
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.61
22 TestOffline 130.03
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 218.01
29 TestAddons/serial/Volcano 42.95
31 TestAddons/serial/GCPAuth/Namespaces 0.12
33 TestAddons/parallel/Registry 17.14
34 TestAddons/parallel/Ingress 21.41
35 TestAddons/parallel/InspektorGadget 11.7
36 TestAddons/parallel/MetricsServer 6.78
37 TestAddons/parallel/HelmTiller 11.7
39 TestAddons/parallel/CSI 54.22
40 TestAddons/parallel/Headlamp 17.38
41 TestAddons/parallel/CloudSpanner 5.69
42 TestAddons/parallel/LocalPath 12.05
43 TestAddons/parallel/NvidiaDevicePlugin 5.54
44 TestAddons/parallel/Yakd 10.99
45 TestAddons/StoppedEnableDisable 13.58
46 TestCertOptions 74.76
47 TestCertExpiration 364.57
48 TestDockerFlags 105.06
49 TestForceSystemdFlag 53.71
50 TestForceSystemdEnv 56.11
52 TestKVMDriverInstallOrUpdate 4
56 TestErrorSpam/setup 49.55
57 TestErrorSpam/start 0.33
58 TestErrorSpam/status 0.73
59 TestErrorSpam/pause 1.2
60 TestErrorSpam/unpause 1.36
61 TestErrorSpam/stop 6.2
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 65.98
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 39.89
68 TestFunctional/serial/KubeContext 0.05
69 TestFunctional/serial/KubectlGetPods 0.07
72 TestFunctional/serial/CacheCmd/cache/add_remote 2.37
73 TestFunctional/serial/CacheCmd/cache/add_local 1.3
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.18
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
81 TestFunctional/serial/ExtraConfig 94.82
83 TestFunctional/serial/LogsCmd 0.92
84 TestFunctional/serial/LogsFileCmd 0.96
85 TestFunctional/serial/InvalidService 3.54
87 TestFunctional/parallel/ConfigCmd 0.36
88 TestFunctional/parallel/DashboardCmd 23.58
89 TestFunctional/parallel/DryRun 0.28
90 TestFunctional/parallel/InternationalLanguage 0.14
91 TestFunctional/parallel/StatusCmd 0.9
95 TestFunctional/parallel/ServiceCmdConnect 7.53
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 61.22
99 TestFunctional/parallel/SSHCmd 0.45
100 TestFunctional/parallel/CpCmd 1.3
101 TestFunctional/parallel/MySQL 40.89
102 TestFunctional/parallel/FileSync 0.23
103 TestFunctional/parallel/CertSync 1.24
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.21
111 TestFunctional/parallel/License 0.17
112 TestFunctional/parallel/ServiceCmd/DeployApp 24.29
113 TestFunctional/parallel/Version/short 0.05
114 TestFunctional/parallel/Version/components 0.53
115 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
119 TestFunctional/parallel/ImageCommands/ImageBuild 3.96
120 TestFunctional/parallel/ImageCommands/Setup 1.58
121 TestFunctional/parallel/DockerEnv/bash 0.78
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.2
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
127 TestFunctional/parallel/ProfileCmd/profile_list 0.32
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
129 TestFunctional/parallel/MountCmd/any-port 8.34
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.77
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.46
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
133 TestFunctional/parallel/ImageCommands/ImageRemove 0.4
134 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.65
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.45
145 TestFunctional/parallel/MountCmd/specific-port 1.93
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.32
147 TestFunctional/parallel/ServiceCmd/List 0.87
148 TestFunctional/parallel/ServiceCmd/JSONOutput 0.83
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
150 TestFunctional/parallel/ServiceCmd/Format 0.31
151 TestFunctional/parallel/ServiceCmd/URL 0.34
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.01
158 TestMultiControlPlane/serial/StartCluster 218.71
159 TestMultiControlPlane/serial/DeployApp 5.39
160 TestMultiControlPlane/serial/PingHostFromPods 1.29
161 TestMultiControlPlane/serial/AddWorkerNode 63.06
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.54
164 TestMultiControlPlane/serial/CopyFile 12.97
165 TestMultiControlPlane/serial/StopSecondaryNode 13.92
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.38
167 TestMultiControlPlane/serial/RestartSecondaryNode 159.67
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.54
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 256.38
170 TestMultiControlPlane/serial/DeleteSecondaryNode 7.16
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.37
172 TestMultiControlPlane/serial/StopCluster 39.08
173 TestMultiControlPlane/serial/RestartCluster 144.27
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.38
175 TestMultiControlPlane/serial/AddSecondaryNode 82.15
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.53
179 TestImageBuild/serial/Setup 49.9
180 TestImageBuild/serial/NormalBuild 1.98
181 TestImageBuild/serial/BuildWithBuildArg 1.27
182 TestImageBuild/serial/BuildWithDockerIgnore 1.01
183 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.77
187 TestJSONOutput/start/Command 65.68
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.56
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.52
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 12.61
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.19
215 TestMainNoArgs 0.04
216 TestMinikubeProfile 100.59
219 TestMountStart/serial/StartWithMountFirst 31.05
220 TestMountStart/serial/VerifyMountFirst 0.38
221 TestMountStart/serial/StartWithMountSecond 30.83
222 TestMountStart/serial/VerifyMountSecond 0.37
223 TestMountStart/serial/DeleteFirst 0.7
224 TestMountStart/serial/VerifyMountPostDelete 0.37
225 TestMountStart/serial/Stop 2.27
226 TestMountStart/serial/RestartStopped 26.32
227 TestMountStart/serial/VerifyMountPostStop 0.37
230 TestMultiNode/serial/FreshStart2Nodes 129.53
231 TestMultiNode/serial/DeployApp2Nodes 3.92
232 TestMultiNode/serial/PingHostFrom2Pods 0.81
233 TestMultiNode/serial/AddNode 59.79
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.22
236 TestMultiNode/serial/CopyFile 7.11
237 TestMultiNode/serial/StopNode 3.27
238 TestMultiNode/serial/StartAfterStop 42.23
239 TestMultiNode/serial/RestartKeepsNodes 190.99
240 TestMultiNode/serial/DeleteNode 2.08
241 TestMultiNode/serial/StopMultiNode 25.23
242 TestMultiNode/serial/RestartMultiNode 119.88
243 TestMultiNode/serial/ValidateNameConflict 62.55
248 TestPreload 304.48
250 TestScheduledStopUnix 122.71
251 TestSkaffold 129.21
254 TestRunningBinaryUpgrade 238.9
256 TestKubernetesUpgrade 203.61
269 TestStoppedBinaryUpgrade/Setup 0.38
270 TestStoppedBinaryUpgrade/Upgrade 135.41
272 TestPause/serial/Start 70.62
273 TestPause/serial/SecondStartNoReconfiguration 63.36
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.14
283 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
284 TestNoKubernetes/serial/StartWithK8s 78.96
285 TestPause/serial/Pause 0.7
286 TestPause/serial/VerifyStatus 0.27
287 TestPause/serial/Unpause 0.61
288 TestPause/serial/PauseAgain 0.66
289 TestPause/serial/DeletePaused 1.06
290 TestPause/serial/VerifyDeletedResources 15.2
291 TestNetworkPlugins/group/auto/Start 62.22
292 TestNetworkPlugins/group/kindnet/Start 101.67
293 TestNoKubernetes/serial/StartWithStopK8s 45.38
294 TestNetworkPlugins/group/calico/Start 109.84
295 TestNetworkPlugins/group/auto/KubeletFlags 0.27
296 TestNetworkPlugins/group/auto/NetCatPod 11.88
298 TestNetworkPlugins/group/auto/DNS 26.66
299 TestNetworkPlugins/group/auto/Localhost 0.14
300 TestNetworkPlugins/group/auto/HairPin 0.15
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.58
304 TestNetworkPlugins/group/custom-flannel/Start 71.68
305 TestNetworkPlugins/group/kindnet/DNS 0.2
306 TestNetworkPlugins/group/kindnet/Localhost 0.16
307 TestNetworkPlugins/group/kindnet/HairPin 0.18
308 TestNetworkPlugins/group/false/Start 100.88
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.23
311 TestNetworkPlugins/group/calico/NetCatPod 11.24
312 TestNetworkPlugins/group/calico/DNS 0.29
313 TestNetworkPlugins/group/calico/Localhost 0.21
314 TestNetworkPlugins/group/calico/HairPin 0.16
315 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
316 TestNoKubernetes/serial/ProfileList 1.18
317 TestNoKubernetes/serial/Stop 59.85
318 TestNetworkPlugins/group/enable-default-cni/Start 100.55
319 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
320 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.22
321 TestNetworkPlugins/group/custom-flannel/DNS 0.19
322 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
323 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
324 TestNetworkPlugins/group/flannel/Start 74.78
326 TestNetworkPlugins/group/false/KubeletFlags 0.2
327 TestNetworkPlugins/group/false/NetCatPod 10.19
328 TestNetworkPlugins/group/bridge/Start 103.06
329 TestNetworkPlugins/group/false/DNS 0.18
330 TestNetworkPlugins/group/false/Localhost 0.14
331 TestNetworkPlugins/group/false/HairPin 0.14
332 TestNetworkPlugins/group/kubenet/Start 108.64
333 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
334 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.25
335 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
336 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
337 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
338 TestNetworkPlugins/group/flannel/ControllerPod 6.01
339 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
340 TestNetworkPlugins/group/flannel/NetCatPod 12.25
342 TestStartStop/group/old-k8s-version/serial/FirstStart 133.96
343 TestNetworkPlugins/group/flannel/DNS 0.18
344 TestNetworkPlugins/group/flannel/Localhost 0.18
345 TestNetworkPlugins/group/flannel/HairPin 0.16
347 TestStartStop/group/no-preload/serial/FirstStart 118.96
348 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
349 TestNetworkPlugins/group/bridge/NetCatPod 11.27
350 TestNetworkPlugins/group/bridge/DNS 0.17
351 TestNetworkPlugins/group/bridge/Localhost 0.14
352 TestNetworkPlugins/group/bridge/HairPin 0.15
353 TestNetworkPlugins/group/kubenet/KubeletFlags 0.34
354 TestNetworkPlugins/group/kubenet/NetCatPod 11.29
356 TestStartStop/group/embed-certs/serial/FirstStart 70.33
357 TestNetworkPlugins/group/kubenet/DNS 0.2
358 TestNetworkPlugins/group/kubenet/Localhost 0.17
359 TestNetworkPlugins/group/kubenet/HairPin 0.17
361 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.27
362 TestStartStop/group/old-k8s-version/serial/DeployApp 8.59
363 TestStartStop/group/no-preload/serial/DeployApp 9.37
364 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
365 TestStartStop/group/embed-certs/serial/DeployApp 9.4
366 TestStartStop/group/old-k8s-version/serial/Stop 13.34
367 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
368 TestStartStop/group/no-preload/serial/Stop 13.37
369 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
370 TestStartStop/group/embed-certs/serial/Stop 13.38
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
372 TestStartStop/group/old-k8s-version/serial/SecondStart 404.64
373 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.26
374 TestStartStop/group/no-preload/serial/SecondStart 314.47
375 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
376 TestStartStop/group/embed-certs/serial/SecondStart 334.08
377 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.31
378 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
379 TestStartStop/group/default-k8s-diff-port/serial/Stop 13.38
380 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
381 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 329.37
382 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
384 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
385 TestStartStop/group/no-preload/serial/Pause 2.49
387 TestStartStop/group/newest-cni/serial/FirstStart 55.9
388 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
389 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
390 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
391 TestStartStop/group/embed-certs/serial/Pause 2.6
392 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.21
395 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.54
396 TestStartStop/group/newest-cni/serial/DeployApp 0
397 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.08
398 TestStartStop/group/newest-cni/serial/Stop 13.32
399 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
401 TestStartStop/group/newest-cni/serial/SecondStart 36.74
402 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
403 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
404 TestStartStop/group/old-k8s-version/serial/Pause 2.45
405 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
406 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
408 TestStartStop/group/newest-cni/serial/Pause 2.18
x
+
TestDownloadOnly/v1.20.0/json-events (12.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-212932 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-212932 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=kvm2 : (12.929515866s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-212932
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-212932: exit status 85 (58.237ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-212932 | jenkins | v1.33.1 | 18 Aug 24 18:37 UTC |          |
	|         | -p download-only-212932        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:37:57
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:37:57.286561 1152912 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:37:57.286767 1152912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:37:57.286828 1152912 out.go:358] Setting ErrFile to fd 2...
	I0818 18:37:57.286847 1152912 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:37:57.287156 1152912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	W0818 18:37:57.287322 1152912 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19423-1145725/.minikube/config/config.json: open /home/jenkins/minikube-integration/19423-1145725/.minikube/config/config.json: no such file or directory
	I0818 18:37:57.287904 1152912 out.go:352] Setting JSON to true
	I0818 18:37:57.288766 1152912 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":98378,"bootTime":1723907899,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:37:57.288825 1152912 start.go:139] virtualization: kvm guest
	I0818 18:37:57.291170 1152912 out.go:97] [download-only-212932] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	W0818 18:37:57.291267 1152912 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/preloaded-tarball: no such file or directory
	I0818 18:37:57.291314 1152912 notify.go:220] Checking for updates...
	I0818 18:37:57.292497 1152912 out.go:169] MINIKUBE_LOCATION=19423
	I0818 18:37:57.293808 1152912 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:37:57.295020 1152912 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	I0818 18:37:57.296075 1152912 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	I0818 18:37:57.297220 1152912 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0818 18:37:57.299277 1152912 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0818 18:37:57.299496 1152912 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:37:57.334606 1152912 out.go:97] Using the kvm2 driver based on user configuration
	I0818 18:37:57.334629 1152912 start.go:297] selected driver: kvm2
	I0818 18:37:57.334645 1152912 start.go:901] validating driver "kvm2" against <nil>
	I0818 18:37:57.335077 1152912 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:37:57.335158 1152912 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/19423-1145725/.minikube/bin:/home/jenkins/workspace/KVM_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0818 18:37:57.350402 1152912 install.go:137] /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2 version is 1.33.1
	I0818 18:37:57.350448 1152912 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0818 18:37:57.351076 1152912 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0818 18:37:57.351262 1152912 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0818 18:37:57.351356 1152912 cni.go:84] Creating CNI manager for ""
	I0818 18:37:57.351380 1152912 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0818 18:37:57.351436 1152912 start.go:340] cluster config:
	{Name:download-only-212932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-212932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:37:57.351668 1152912 iso.go:125] acquiring lock: {Name:mkb8cace5317b9fbdd5a745866acff5ebdb0878a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0818 18:37:57.353348 1152912 out.go:97] Downloading VM boot image ...
	I0818 18:37:57.353385 1152912 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso.sha256 -> /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/iso/amd64/minikube-v1.33.1-1723740674-19452-amd64.iso
	I0818 18:38:02.786247 1152912 out.go:97] Starting "download-only-212932" primary control-plane node in "download-only-212932" cluster
	I0818 18:38:02.786271 1152912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 18:38:02.805849 1152912 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0818 18:38:02.805886 1152912 cache.go:56] Caching tarball of preloaded images
	I0818 18:38:02.806017 1152912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 18:38:02.807549 1152912 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0818 18:38:02.807570 1152912 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 18:38:02.831721 1152912 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0818 18:38:05.134631 1152912 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 18:38:05.134743 1152912 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0818 18:38:05.912684 1152912 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0818 18:38:05.913047 1152912 profile.go:143] Saving config to /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/download-only-212932/config.json ...
	I0818 18:38:05.913089 1152912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/download-only-212932/config.json: {Name:mkdf98bf81b6f5a6fa38789e8a932b431c2c5a93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0818 18:38:05.913302 1152912 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0818 18:38:05.913505 1152912 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/19423-1145725/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-212932 host does not exist
	  To start a cluster, run: "minikube start -p download-only-212932"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
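
The v1.20.0 download-only run above fetches the preload tarball with an MD5 checksum appended to the download URL (preload.go:236) and verifies it on disk before caching (preload.go:254). As a rough illustration of that pattern only (not minikube's implementation), a minimal Go sketch that downloads a file and checks its MD5 digest could look like the following; the URL and destination path are placeholders, and the digest is the md5 shown in the log above.

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // downloadAndVerify fetches url into dest and confirms the MD5 digest
    // matches wantMD5 (lowercase hex). Illustrative sketch, not minikube code.
    func downloadAndVerify(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // Stream the body into the file and the hash at the same time.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // Hypothetical URL and path; the checksum is the md5 from the log above.
        err := downloadAndVerify(
            "https://example.com/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4",
            "/tmp/preloaded-images.tar.lz4",
            "9a82241e9b8b4ad2b5cca73108f2c7a3",
        )
        fmt.Println(err)
    }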

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-212932
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/json-events (3.49s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-929454 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-929454 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=docker --driver=kvm2 : (3.486184897s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (3.49s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-929454
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-929454: exit status 85 (55.800428ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-212932 | jenkins | v1.33.1 | 18 Aug 24 18:37 UTC |                     |
	|         | -p download-only-212932        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| delete  | -p download-only-212932        | download-only-212932 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC | 18 Aug 24 18:38 UTC |
	| start   | -o=json --download-only        | download-only-929454 | jenkins | v1.33.1 | 18 Aug 24 18:38 UTC |                     |
	|         | -p download-only-929454        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/18 18:38:10
	Running on machine: ubuntu-20-agent-3
	Binary: Built with gc go1.22.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0818 18:38:10.525126 1153140 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:38:10.525250 1153140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:10.525259 1153140 out.go:358] Setting ErrFile to fd 2...
	I0818 18:38:10.525263 1153140 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:38:10.525437 1153140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 18:38:10.525978 1153140 out.go:352] Setting JSON to true
	I0818 18:38:10.526899 1153140 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":98391,"bootTime":1723907899,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:38:10.526956 1153140 start.go:139] virtualization: kvm guest
	I0818 18:38:10.528893 1153140 out.go:97] [download-only-929454] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:38:10.529051 1153140 notify.go:220] Checking for updates...
	I0818 18:38:10.530249 1153140 out.go:169] MINIKUBE_LOCATION=19423
	I0818 18:38:10.531504 1153140 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:38:10.532658 1153140 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	I0818 18:38:10.533778 1153140 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	I0818 18:38:10.534972 1153140 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-929454 host does not exist
	  To start a cluster, run: "minikube start -p download-only-929454"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-929454
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

                                                
                                    
x
+
TestBinaryMirror (0.61s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-076219 --alsologtostderr --binary-mirror http://127.0.0.1:35519 --driver=kvm2 
helpers_test.go:175: Cleaning up "binary-mirror-076219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-076219
--- PASS: TestBinaryMirror (0.61s)
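
TestBinaryMirror above starts minikube with --binary-mirror pointed at a local HTTP endpoint (http://127.0.0.1:35519) so that the Kubernetes binaries are pulled from it rather than from the public release bucket. As a hedged sketch of the general idea only (the directory, port, and on-disk layout are assumptions, not taken from the test harness), a minimal local mirror is just a static file server:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Hypothetical mirror directory; the exact layout minikube expects
        // under the mirror URL is not shown in this log.
        const dir = "/tmp/k8s-binary-mirror"
        const addr = "127.0.0.1:35519"

        // Serve the cached binaries read-only over plain HTTP.
        http.Handle("/", http.FileServer(http.Dir(dir)))
        log.Printf("serving %s on http://%s", dir, addr)
        log.Fatal(http.ListenAndServe(addr, nil))
    }

A cluster start would then reference it the same way the test command above does, via --binary-mirror http://127.0.0.1:35519.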

                                                
                                    
x
+
TestOffline (130.03s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-726156 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-726156 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2 : (2m8.988566581s)
helpers_test.go:175: Cleaning up "offline-docker-726156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-726156
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-726156: (1.037857058s)
--- PASS: TestOffline (130.03s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-058019
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-058019: exit status 85 (47.705213ms)

                                                
                                                
-- stdout --
	* Profile "addons-058019" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-058019"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-058019
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-058019: exit status 85 (49.163989ms)

                                                
                                                
-- stdout --
	* Profile "addons-058019" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-058019"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)
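
Both PreSetup checks above confirm that addons enable and addons disable against a profile that does not exist fail fast with a non-zero exit status (85 in this run) and print a hint to run "minikube profile list". A minimal sketch of capturing that exit code from Go, in the same spirit as the test's dbg runner (the binary path and profile name here are illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        // Illustrative invocation mirroring the shape of the test's command.
        cmd := exec.Command("out/minikube-linux-amd64", "addons", "enable", "dashboard", "-p", "no-such-profile")
        out, err := cmd.CombinedOutput()

        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            // The missing profile surfaces as a non-zero exit code plus the stdout hint.
            fmt.Printf("exit code %d\n%s", exitErr.ExitCode(), out)
            return
        }
        fmt.Printf("unexpected result: err=%v\n%s", err, out)
    }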

                                                
                                    
x
+
TestAddons/Setup (218.01s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-058019 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-058019 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=kvm2  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m38.007753995s)
--- PASS: TestAddons/Setup (218.01s)

                                                
                                    
x
+
TestAddons/serial/Volcano (42.95s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 16.965578ms
addons_test.go:905: volcano-admission stabilized in 17.032806ms
addons_test.go:897: volcano-scheduler stabilized in 17.071638ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-vjl4r" [f1abec02-2498-4943-960d-8d5ac71d38a7] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004615729s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-5987c" [89e892fa-59ba-44d3-b709-8c3c84aa8472] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004451489s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-mzfsk" [75267e4d-d2d5-4075-b262-0cb308675ca4] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003768794s
addons_test.go:932: (dbg) Run:  kubectl --context addons-058019 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-058019 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-058019 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [63b8277e-02b1-4133-ae9d-5dfa1b3dbc0a] Pending
helpers_test.go:344: "test-job-nginx-0" [63b8277e-02b1-4133-ae9d-5dfa1b3dbc0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [63b8277e-02b1-4133-ae9d-5dfa1b3dbc0a] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 16.003385173s
addons_test.go:968: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-amd64 -p addons-058019 addons disable volcano --alsologtostderr -v=1: (10.558307604s)
--- PASS: TestAddons/serial/Volcano (42.95s)
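
The Volcano check waits up to 6m0s for the pods behind each app=volcano-* label in volcano-system to become healthy before submitting the test vcjob and finally disabling the addon. Expressed with plain kubectl wait via os/exec rather than the test's own label-polling helpers (a sketch only; the context name and selectors are copied from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // waitForPods blocks until pods matching selector in namespace report Ready,
    // or the timeout elapses; it shells out to kubectl like the test's dbg runner.
    func waitForPods(kubeContext, namespace, selector, timeout string) error {
        cmd := exec.Command("kubectl", "--context", kubeContext,
            "wait", "--for=condition=Ready", "pod",
            "-l", selector, "-n", namespace, "--timeout="+timeout)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl wait failed: %v\n%s", err, out)
        }
        return nil
    }

    func main() {
        for _, sel := range []string{"app=volcano-scheduler", "app=volcano-admission", "app=volcano-controller"} {
            if err := waitForPods("addons-058019", "volcano-system", sel, "6m"); err != nil {
                fmt.Println(err)
            }
        }
    }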

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-058019 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-058019 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestAddons/parallel/Registry (17.14s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 5.389232ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-hbswk" [598a3f23-f8aa-4e5d-bc43-07150c184373] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003632771s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-dp4kh" [40dd7fae-6530-4a6b-abd5-474552f9f987] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004098354s
addons_test.go:342: (dbg) Run:  kubectl --context addons-058019 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-058019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-058019 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.417515564s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 ip
2024/08/18 18:43:11 [DEBUG] GET http://192.168.39.183:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.14s)
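
The registry check reaches the in-cluster service by its DNS name, registry.kube-system.svc.cluster.local, which is why the probe is wrapped in a one-shot busybox pod running wget --spider. An equivalent reachability probe written directly in Go (it would likewise only resolve from inside the cluster, e.g. from a pod) is roughly:

    package main

    import (
        "fmt"
        "net/http"
    )

    func main() {
        // A HEAD request plays the role of wget --spider: confirm the endpoint
        // answers without downloading a body. Only resolvable in-cluster.
        resp, err := http.Head("http://registry.kube-system.svc.cluster.local")
        if err != nil {
            fmt.Println("registry not reachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("registry responded:", resp.Status)
    }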

                                                
                                    
x
+
TestAddons/parallel/Ingress (21.41s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-058019 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-058019 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-058019 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0a4a2f99-58d4-40e3-87f1-e308db6c0ec8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0a4a2f99-58d4-40e3-87f1-e308db6c0ec8] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.005075441s
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-058019 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.39.183
addons_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-amd64 -p addons-058019 addons disable ingress-dns --alsologtostderr -v=1: (1.434532668s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p addons-058019 addons disable ingress --alsologtostderr -v=1: (7.699708736s)
--- PASS: TestAddons/parallel/Ingress (21.41s)
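
The Ingress test exercises the routing rule by curling the node with an explicit Host: nginx.example.com header (the ssh'd curl above), so the request is matched by hostname rather than by IP. The same check written directly in Go against a placeholder address would look roughly like this:

    package main

    import (
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Placeholder address; the test runs curl inside the VM against 127.0.0.1.
        req, err := http.NewRequest("GET", "http://192.168.39.183/", nil)
        if err != nil {
            panic(err)
        }
        // Setting req.Host overrides the Host header, selecting the
        // Ingress rule defined for nginx.example.com.
        req.Host = "nginx.example.com"

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Status, len(body), "bytes")
    }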

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lrjlt" [71b2350f-6223-4d0d-a8b1-a9d426606f57] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004874463s
addons_test.go:851: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-058019
addons_test.go:851: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-058019: (5.698474072s)
--- PASS: TestAddons/parallel/InspektorGadget (11.70s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.39652ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-t8kfh" [f47dfef5-4c42-4347-a6d4-551235cf08ad] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003661491s
addons_test.go:417: (dbg) Run:  kubectl --context addons-058019 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.78s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.7s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 6.290333ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-b48cc5f79-lj5mf" [2c929f44-2c8c-4b84-8c5d-fd2b22012444] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.003850538s
addons_test.go:475: (dbg) Run:  kubectl --context addons-058019 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-058019 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.172292917s)
addons_test.go:492: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.70s)

                                                
                                    
x
+
TestAddons/parallel/CSI (54.22s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 12.851227ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-058019 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-058019 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [caa5dfd8-833a-4716-b4e0-d8254ac0c287] Pending
helpers_test.go:344: "task-pv-pod" [caa5dfd8-833a-4716-b4e0-d8254ac0c287] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [caa5dfd8-833a-4716-b4e0-d8254ac0c287] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.006355118s
addons_test.go:590: (dbg) Run:  kubectl --context addons-058019 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-058019 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-058019 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-058019 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-058019 delete pod task-pv-pod: (1.074553264s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-058019 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-058019 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-058019 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b6aca4de-a14b-40ea-98e7-1e0832327b24] Pending
helpers_test.go:344: "task-pv-pod-restore" [b6aca4de-a14b-40ea-98e7-1e0832327b24] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b6aca4de-a14b-40ea-98e7-1e0832327b24] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.004168261s
addons_test.go:632: (dbg) Run:  kubectl --context addons-058019 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-058019 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-058019 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-amd64 -p addons-058019 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.688411655s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (54.22s)
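
The CSI flow above polls kubectl get pvc <name> -o jsonpath={.status.phase} until each claim leaves Pending, then repeats the cycle for the snapshot-restored claim. A minimal polling loop of the same shape (context and claim name taken from the log; the 2s interval is an assumption):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // pvcPhase returns status.phase for a PersistentVolumeClaim via kubectl.
    func pvcPhase(kubeContext, name, namespace string) (string, error) {
        out, err := exec.Command("kubectl", "--context", kubeContext,
            "get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // the test allows 6m0s
        for time.Now().Before(deadline) {
            phase, err := pvcPhase("addons-058019", "hpvc", "default")
            if err == nil && phase == "Bound" {
                fmt.Println("pvc hpvc is Bound")
                return
            }
            time.Sleep(2 * time.Second) // assumed poll interval
        }
        fmt.Println("timed out waiting for pvc hpvc")
    }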

                                                
                                    
x
+
TestAddons/parallel/Headlamp (17.38s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-058019 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-vbt9j" [5b82b8fc-ca2e-4838-a47a-5a95a8bc4c54] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-vbt9j" [5b82b8fc-ca2e-4838-a47a-5a95a8bc4c54] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-vbt9j" [5b82b8fc-ca2e-4838-a47a-5a95a8bc4c54] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.006647839s
addons_test.go:839: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-amd64 -p addons-058019 addons disable headlamp --alsologtostderr -v=1: (5.600613168s)
--- PASS: TestAddons/parallel/Headlamp (17.38s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-7thd2" [6eeb911d-24b5-42e0-b74d-bbb8ebe382d4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006472895s
addons_test.go:870: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-058019
--- PASS: TestAddons/parallel/CloudSpanner (5.69s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (12.05s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-058019 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-058019 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [cf23fbdf-8a1f-46c3-b6b6-fd6af2d86acf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [cf23fbdf-8a1f-46c3-b6b6-fd6af2d86acf] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [cf23fbdf-8a1f-46c3-b6b6-fd6af2d86acf] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004639584s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-058019 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 ssh "cat /opt/local-path-provisioner/pvc-7259e7ea-987c-4f1b-a5e1-b2977cde4c5d_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-058019 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-058019 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (12.05s)

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-f7tmg" [e86b96b4-841f-4954-96b0-9f5e441cb2d7] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005118484s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-058019
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.54s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.99s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zwmlx" [0ffd0b3a-10d3-4d5c-bd8f-d6ab5fcd7c6b] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003623527s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-amd64 -p addons-058019 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-amd64 -p addons-058019 addons disable yakd --alsologtostderr -v=1: (5.983220557s)
--- PASS: TestAddons/parallel/Yakd (10.99s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (13.58s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-058019
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-058019: (13.312085727s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-058019
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-058019
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-058019
--- PASS: TestAddons/StoppedEnableDisable (13.58s)

                                                
                                    
x
+
TestCertOptions (74.76s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-138877 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-138877 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 : (1m13.45868544s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-138877 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-138877 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-138877 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-138877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-138877
--- PASS: TestCertOptions (74.76s)
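
TestCertOptions asserts that the extra --apiserver-ips/--apiserver-names values end up as SANs in the apiserver certificate (dumped with openssl x509 -text -noout over ssh) and inspects the kubeconfig (config view, admin.conf) for the non-default --apiserver-port. Reading the same certificate fields with Go's crypto/x509, given a PEM file copied off the node (the local path here is hypothetical), is roughly:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Hypothetical local copy of /var/lib/minikube/certs/apiserver.crt.
        data, err := os.ReadFile("apiserver.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // SANs that --apiserver-names / --apiserver-ips should have added.
        fmt.Println("DNS names:", cert.DNSNames)
        fmt.Println("IPs:      ", cert.IPAddresses)
        // Expiry is what TestCertExpiration, just below, is concerned with.
        fmt.Println("NotAfter: ", cert.NotAfter)
    }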

                                                
                                    
x
+
TestCertExpiration (364.57s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-852824 --memory=2048 --cert-expiration=3m --driver=kvm2 
E0818 19:33:36.718323 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-852824 --memory=2048 --cert-expiration=3m --driver=kvm2 : (1m41.146813263s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-852824 --memory=2048 --cert-expiration=8760h --driver=kvm2 
E0818 19:38:18.467148 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-852824 --memory=2048 --cert-expiration=8760h --driver=kvm2 : (1m22.348182457s)
helpers_test.go:175: Cleaning up "cert-expiration-852824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-852824
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-852824: (1.076800899s)
--- PASS: TestCertExpiration (364.57s)

                                                
                                    
x
+
TestDockerFlags (105.06s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-260273 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-260273 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=kvm2 : (1m43.168030826s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-260273 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-260273 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-260273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-260273
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-260273: (1.418290131s)
--- PASS: TestDockerFlags (105.06s)
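
TestDockerFlags passes environment variables and daemon options at start time (--docker-env=FOO=BAR, --docker-env=BAZ=BAT, --docker-opt=debug, --docker-opt=icc=true) and then confirms they reached the Docker daemon by reading systemctl show docker --property=Environment and --property=ExecStart over ssh. A small sketch of checking the Environment line for the expected pairs (the input string is illustrative, not captured from this run):

    package main

    import (
        "fmt"
        "strings"
    )

    // hasEnv reports whether every expected KEY=VALUE pair appears in an
    // "Environment=..." line as printed by systemctl show.
    func hasEnv(line string, want ...string) bool {
        fields := strings.Fields(strings.TrimPrefix(line, "Environment="))
        set := make(map[string]bool, len(fields))
        for _, f := range fields {
            set[f] = true
        }
        for _, w := range want {
            if !set[w] {
                return false
            }
        }
        return true
    }

    func main() {
        // Illustrative systemctl output; the test reads it via `minikube ssh`.
        line := "Environment=FOO=BAR BAZ=BAT NO_PROXY=localhost"
        fmt.Println(hasEnv(line, "FOO=BAR", "BAZ=BAT")) // true
    }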

                                                
                                    
x
+
TestForceSystemdFlag (53.71s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-815620 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-815620 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2 : (52.640320673s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-815620 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-815620" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-815620
--- PASS: TestForceSystemdFlag (53.71s)

                                                
                                    
x
+
TestForceSystemdEnv (56.11s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-171009 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-171009 --memory=2048 --alsologtostderr -v=5 --driver=kvm2 : (54.829771333s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-171009 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-171009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-171009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-171009: (1.023858497s)
--- PASS: TestForceSystemdEnv (56.11s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (4s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.00s)

                                                
                                    
x
+
TestErrorSpam/setup (49.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-290448 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-290448 --driver=kvm2 
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-290448 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-290448 --driver=kvm2 : (49.552740565s)
--- PASS: TestErrorSpam/setup (49.55s)

                                                
                                    
x
+
TestErrorSpam/start (0.33s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 start --dry-run
--- PASS: TestErrorSpam/start (0.33s)

                                                
                                    
x
+
TestErrorSpam/status (0.73s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 status
--- PASS: TestErrorSpam/status (0.73s)

                                                
                                    
x
+
TestErrorSpam/pause (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 pause
--- PASS: TestErrorSpam/pause (1.20s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

                                                
                                    
x
+
TestErrorSpam/stop (6.2s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 stop: (3.28048734s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 stop: (1.612978208s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-290448 --log_dir /tmp/nospam-290448 stop: (1.310854996s)
--- PASS: TestErrorSpam/stop (6.20s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19423-1145725/.minikube/files/etc/test/nested/copy/1152900/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (65.98s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-771033 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-771033 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2 : (1m5.976707332s)
--- PASS: TestFunctional/serial/StartWithProxy (65.98s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (39.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-771033 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-771033 --alsologtostderr -v=8: (39.889295226s)
functional_test.go:663: soft start took 39.889925165s for "functional-771033" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.89s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-771033 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.37s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-771033 /tmp/TestFunctionalserialCacheCmdcacheadd_local1520695071/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cache add minikube-local-cache-test:functional-771033
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cache delete minikube-local-cache-test:functional-771033
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-771033
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh sudo docker rmi registry.k8s.io/pause:latest
E0818 18:46:53.183288 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:46:53.190204 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:46:53.201566 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:46:53.223056 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:46:53.264510 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E0818 18:46:53.346472 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:46:53.508354 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (212.991141ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cache reload
E0818 18:46:53.829744 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.18s)
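For reference, a condensed sketch of the reload cycle this test drives, assuming the functional-771033 profile is up with the Docker runtime; every command mirrors one shown in the log above:

    # Remove the cached image inside the node, confirm it is gone, then restore it.
    out/minikube-linux-amd64 -p functional-771033 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-771033 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image absent
    out/minikube-linux-amd64 -p functional-771033 cache reload                                            # pushes cached images back into the node
    out/minikube-linux-amd64 -p functional-771033 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again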

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 kubectl -- --context functional-771033 get pods
E0818 18:46:54.471821 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-771033 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (94.82s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-771033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0818 18:46:55.753717 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:46:58.315360 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:47:03.436718 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:47:13.678207 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:47:34.159578 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:48:15.121703 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-771033 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m34.822712646s)
functional_test.go:761: restart took 1m34.822851067s for "functional-771033" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (94.82s)
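As a rough guide to what --extra-config does here, the sketch below restarts the existing profile with the extra apiserver flag and then looks for it inside the node. The manifest path is the usual kubeadm location and is an assumption on my part, not something this test asserts:

    # Restart the running cluster, passing an additional admission plugin to the apiserver.
    out/minikube-linux-amd64 start -p functional-771033 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # Assumed check: the flag should appear in the kube-apiserver static pod manifest.
    out/minikube-linux-amd64 -p functional-771033 ssh \
      "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"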

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 logs
--- PASS: TestFunctional/serial/LogsCmd (0.92s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 logs --file /tmp/TestFunctionalserialLogsFileCmd3262617838/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.96s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (3.54s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-771033 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-771033
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-771033: exit status 115 (289.001138ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.95:32765 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-771033 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.54s)
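The assertion here is that `minikube service` refuses to hand out a URL for a Service with no running backend and exits with status 115 (SVC_UNREACHABLE). A minimal way to reproduce that state, using an illustrative Service rather than the testdata manifest:

    # A NodePort Service whose selector matches no running pod.
    kubectl --context functional-771033 create service nodeport invalid-svc --tcp=80:80
    out/minikube-linux-amd64 service invalid-svc -p functional-771033
    echo $?    # 115: no running pod backs the service
    kubectl --context functional-771033 delete service invalid-svc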

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 config get cpus: exit status 14 (66.923562ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 config get cpus: exit status 14 (48.292599ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
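For context, the round trip being exercised: `config get` on an unset key exits with status 14, as seen twice above, while a set key reads back cleanly. A small sketch against the same profile:

    out/minikube-linux-amd64 -p functional-771033 config set cpus 2
    out/minikube-linux-amd64 -p functional-771033 config get cpus     # prints 2, exit 0
    out/minikube-linux-amd64 -p functional-771033 config unset cpus
    out/minikube-linux-amd64 -p functional-771033 config get cpus     # key not found, exit 14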

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (23.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-771033 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-771033 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1160886: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (23.58s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-771033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-771033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (140.569416ms)

                                                
                                                
-- stdout --
	* [functional-771033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:48:39.615044 1160314 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:48:39.615169 1160314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:48:39.615178 1160314 out.go:358] Setting ErrFile to fd 2...
	I0818 18:48:39.615182 1160314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:48:39.615353 1160314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 18:48:39.615851 1160314 out.go:352] Setting JSON to false
	I0818 18:48:39.616892 1160314 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":99021,"bootTime":1723907899,"procs":265,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:48:39.616954 1160314 start.go:139] virtualization: kvm guest
	I0818 18:48:39.619011 1160314 out.go:177] * [functional-771033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0818 18:48:39.620248 1160314 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:48:39.620294 1160314 notify.go:220] Checking for updates...
	I0818 18:48:39.622482 1160314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:48:39.623776 1160314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	I0818 18:48:39.624893 1160314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	I0818 18:48:39.626195 1160314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:48:39.627404 1160314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:48:39.629039 1160314 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 18:48:39.629681 1160314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:48:39.629774 1160314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:48:39.646450 1160314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44655
	I0818 18:48:39.646927 1160314 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:48:39.647516 1160314 main.go:141] libmachine: Using API Version  1
	I0818 18:48:39.647547 1160314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:48:39.647943 1160314 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:48:39.648207 1160314 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:48:39.648501 1160314 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:48:39.648831 1160314 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:48:39.648870 1160314 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:48:39.664603 1160314 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0818 18:48:39.665023 1160314 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:48:39.665514 1160314 main.go:141] libmachine: Using API Version  1
	I0818 18:48:39.665538 1160314 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:48:39.665899 1160314 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:48:39.666108 1160314 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:48:39.702005 1160314 out.go:177] * Using the kvm2 driver based on existing profile
	I0818 18:48:39.702991 1160314 start.go:297] selected driver: kvm2
	I0818 18:48:39.703014 1160314 start.go:901] validating driver "kvm2" against &{Name:functional-771033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:48:39.703150 1160314 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:48:39.704966 1160314 out.go:201] 
	W0818 18:48:39.706084 1160314 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0818 18:48:39.707169 1160314 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-771033 --dry-run --alsologtostderr -v=1 --driver=kvm2 
--- PASS: TestFunctional/parallel/DryRun (0.28s)
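What the test checks is that --dry-run still runs driver and resource validation against the existing profile, so an impossible memory request fails fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the VM. A brief sketch:

    # Rejected during validation; nothing is started.
    out/minikube-linux-amd64 start -p functional-771033 --dry-run --memory 250MB --driver=kvm2
    echo $?    # 23
    # A dry run with acceptable settings validates the profile and exits cleanly.
    out/minikube-linux-amd64 start -p functional-771033 --dry-run --alsologtostderr -v=1 --driver=kvm2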

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-771033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-771033 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 : exit status 23 (138.396294ms)

                                                
                                                
-- stdout --
	* [functional-771033] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:48:39.898220 1160392 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:48:39.898343 1160392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:48:39.898353 1160392 out.go:358] Setting ErrFile to fd 2...
	I0818 18:48:39.898358 1160392 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:48:39.898598 1160392 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 18:48:39.899150 1160392 out.go:352] Setting JSON to false
	I0818 18:48:39.900274 1160392 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-3","uptime":99021,"bootTime":1723907899,"procs":264,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1066-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0818 18:48:39.900345 1160392 start.go:139] virtualization: kvm guest
	I0818 18:48:39.902247 1160392 out.go:177] * [functional-771033] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0818 18:48:39.903548 1160392 out.go:177]   - MINIKUBE_LOCATION=19423
	I0818 18:48:39.903604 1160392 notify.go:220] Checking for updates...
	I0818 18:48:39.905747 1160392 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0818 18:48:39.907081 1160392 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	I0818 18:48:39.908140 1160392 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	I0818 18:48:39.909212 1160392 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0818 18:48:39.910288 1160392 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0818 18:48:39.911754 1160392 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 18:48:39.912171 1160392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:48:39.912236 1160392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:48:39.928483 1160392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I0818 18:48:39.928966 1160392 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:48:39.929576 1160392 main.go:141] libmachine: Using API Version  1
	I0818 18:48:39.929598 1160392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:48:39.929988 1160392 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:48:39.930221 1160392 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:48:39.930529 1160392 driver.go:394] Setting default libvirt URI to qemu:///system
	I0818 18:48:39.930866 1160392 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:48:39.930906 1160392 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:48:39.946589 1160392 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44579
	I0818 18:48:39.947028 1160392 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:48:39.947503 1160392 main.go:141] libmachine: Using API Version  1
	I0818 18:48:39.947533 1160392 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:48:39.947883 1160392 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:48:39.948071 1160392 main.go:141] libmachine: (functional-771033) Calling .DriverName
	I0818 18:48:39.982390 1160392 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0818 18:48:39.983464 1160392 start.go:297] selected driver: kvm2
	I0818 18:48:39.983476 1160392 start.go:901] validating driver "kvm2" against &{Name:functional-771033 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19452/minikube-v1.33.1-1723740674-19452-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1723740748-19452@sha256:2211a6931895d2d502e957e9667096db10734a96767d670cb4dbffdd37397b0d Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.0 ClusterName:functional-771033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.95 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0818 18:48:39.983600 1160392 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0818 18:48:39.985587 1160392 out.go:201] 
	W0818 18:48:39.986689 1160392 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0818 18:48:39.987909 1160392 out.go:201] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.90s)
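The three invocations above cover the status output formats; the Go template keys (.Host, .Kubelet, .APIServer, .Kubeconfig) are taken from the command in the log. In outline:

    out/minikube-linux-amd64 -p functional-771033 status            # human-readable summary
    out/minikube-linux-amd64 -p functional-771033 status -o json    # machine-readable
    out/minikube-linux-amd64 -p functional-771033 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'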

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (7.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-771033 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-771033 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9s4ks" [51a56199-603d-4650-8d70-e9ccaab5d1be] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-9s4ks" [51a56199-603d-4650-8d70-e9ccaab5d1be] Running
2024/08/18 18:49:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004886719s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.39.95:31100
functional_test.go:1675: http://192.168.39.95:31100: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-67bdd5bbb4-9s4ks

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.95:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.95:31100
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.53s)
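The flow above is: create an echoserver deployment, expose it as a NodePort, ask minikube for the node URL, and hit it. A hedged reproduction, where the `kubectl wait` step stands in for the test's own readiness polling:

    kubectl --context functional-771033 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-771033 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-771033 wait --for=condition=available deployment/hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-771033 service hello-node-connect --url)
    curl -s "$URL"    # echoserver reflects the request, as in the body shown above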

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (61.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [01c74b7d-d168-47d2-8415-af0dcd45453e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005460459s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-771033 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-771033 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-771033 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-771033 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-771033 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-771033 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-771033 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-771033 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-771033 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d445a51a-4453-46ea-81f9-633c6d4b486e] Pending
helpers_test.go:344: "sp-pod" [d445a51a-4453-46ea-81f9-633c6d4b486e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d445a51a-4453-46ea-81f9-633c6d4b486e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 18.003921233s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-771033 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-771033 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-771033 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [09d6c8b3-2be3-42bf-b374-be9d3d186a09] Pending
helpers_test.go:344: "sp-pod" [09d6c8b3-2be3-42bf-b374-be9d3d186a09] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [09d6c8b3-2be3-42bf-b374-be9d3d186a09] Running
E0818 18:49:37.043421 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004224197s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-771033 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (61.22s)
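The core of this test is a persistence check: data written through the PVC must survive deleting and recreating the pod. A sketch using the same testdata manifests and pod name as the log, with `kubectl wait` substituting for the test's polling:

    kubectl --context functional-771033 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-771033 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-771033 wait --for=condition=Ready pod/sp-pod --timeout=180s
    kubectl --context functional-771033 exec sp-pod -- touch /tmp/mount/foo
    # Recreate the pod; the PVC-backed file must still be there.
    kubectl --context functional-771033 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-771033 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-771033 wait --for=condition=Ready pod/sp-pod --timeout=180s
    kubectl --context functional-771033 exec sp-pod -- ls /tmp/mount    # expects foo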

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh -n functional-771033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cp functional-771033:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3909575738/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh -n functional-771033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh -n functional-771033 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.30s)
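The three copies above cover host-to-node, node-to-host using the <profile>:<path> form, and host-to-node into a directory that does not yet exist inside the node, each verified afterwards with `ssh sudo cat`. A compact sketch (paths as in the log):

    out/minikube-linux-amd64 -p functional-771033 cp testdata/cp-test.txt /home/docker/cp-test.txt                 # host -> node
    out/minikube-linux-amd64 -p functional-771033 cp functional-771033:/home/docker/cp-test.txt /tmp/cp-test.txt   # node -> host
    out/minikube-linux-amd64 -p functional-771033 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt          # host -> node, new dir
    out/minikube-linux-amd64 -p functional-771033 ssh -n functional-771033 "sudo cat /home/docker/cp-test.txt"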

                                                
                                    
x
+
TestFunctional/parallel/MySQL (40.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-771033 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-wlwhf" [cab3e1ef-594e-4379-bcbd-702d6492834a] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-wlwhf" [cab3e1ef-594e-4379-bcbd-702d6492834a] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 36.004331177s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;": exit status 1 (175.374229ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;": exit status 1 (136.424384ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;": exit status 1 (130.912228ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1807: (dbg) Run:  kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (40.89s)
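The non-zero exits above are expected: the mysql container reports Running while mysqld is still initializing, so the test retries the query until it succeeds. A hedged equivalent of that probe, reusing the pod name from this run:

    # Poll until mysqld inside the pod actually answers; Running does not mean ready to serve.
    for i in $(seq 1 30); do
      kubectl --context functional-771033 exec mysql-6cdb49bbb-wlwhf -- \
        mysql -ppassword -e "show databases;" && break
      sleep 2
    done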

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1152900/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /etc/test/nested/copy/1152900/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1152900.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /etc/ssl/certs/1152900.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1152900.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /usr/share/ca-certificates/1152900.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11529002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /etc/ssl/certs/11529002.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11529002.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /usr/share/ca-certificates/11529002.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.24s)
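What is being verified: certificates placed on the host are synced into the node at both /etc/ssl/certs and /usr/share/ca-certificates, plus a hashed alias (51391683.0 and 3ec20f2e.0 above). A compact version of the same presence check, looping over the paths from the log:

    for f in /etc/ssl/certs/1152900.pem /usr/share/ca-certificates/1152900.pem /etc/ssl/certs/51391683.0; do
      out/minikube-linux-amd64 -p functional-771033 ssh "sudo cat $f" > /dev/null && echo "present: $f"
    done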

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-771033 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
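The go-template above only enumerates the label keys on the first node; an equivalent, more readable manual check against the same context would be:

# Show all node labels for the functional cluster:
kubectl --context functional-771033 get nodes --show-labels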
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh "sudo systemctl is-active crio": exit status 1 (207.041097ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
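The non-zero exit above is the expected, passing outcome: this cluster uses the docker runtime, so the test asserts that crio is not active. systemctl is-active exits non-zero (3) for an inactive unit while printing "inactive" on stdout, and the ssh wrapper surfaces that as exit status 1. A quick manual version of the same check:

# docker should report "active"; crio should report "inactive" with a non-zero exit:
out/minikube-linux-amd64 -p functional-771033 ssh "sudo systemctl is-active docker"
out/minikube-linux-amd64 -p functional-771033 ssh "sudo systemctl is-active crio"; echo "exit=$?"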
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.21s)

TestFunctional/parallel/License (0.17s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.17s)

TestFunctional/parallel/ServiceCmd/DeployApp (24.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-771033 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-771033 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-tdhx7" [c3a3b3a5-1bb9-425a-bd0a-a590dddf2920] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
helpers_test.go:344: "hello-node-6b9f76b5c7-tdhx7" [c3a3b3a5-1bb9-425a-bd0a-a590dddf2920] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-tdhx7" [c3a3b3a5-1bb9-425a-bd0a-a590dddf2920] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 24.021323122s
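The pods polled above come from the two kubectl commands at the start of this block; reproduced as stand-alone commands for reference:

kubectl --context functional-771033 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-771033 expose deployment hello-node --type=NodePort --port=8080
# then wait until the pod reaches Running:
kubectl --context functional-771033 get pods -l app=hello-node --watch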
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (24.29s)

TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.53s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-771033 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-771033
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-771033
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-771033 image ls --format short --alsologtostderr:
I0818 18:49:08.417433 1161835 out.go:345] Setting OutFile to fd 1 ...
I0818 18:49:08.417612 1161835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:08.417627 1161835 out.go:358] Setting ErrFile to fd 2...
I0818 18:49:08.417633 1161835 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:08.417939 1161835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
I0818 18:49:08.418789 1161835 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:08.418979 1161835 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:08.419673 1161835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:08.419738 1161835 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:08.434967 1161835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38985
I0818 18:49:08.435502 1161835 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:08.436069 1161835 main.go:141] libmachine: Using API Version  1
I0818 18:49:08.436113 1161835 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:08.436498 1161835 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:08.436684 1161835 main.go:141] libmachine: (functional-771033) Calling .GetState
I0818 18:49:08.438548 1161835 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:08.438583 1161835 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:08.453931 1161835 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35943
I0818 18:49:08.454468 1161835 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:08.455019 1161835 main.go:141] libmachine: Using API Version  1
I0818 18:49:08.455051 1161835 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:08.455431 1161835 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:08.455632 1161835 main.go:141] libmachine: (functional-771033) Calling .DriverName
I0818 18:49:08.455906 1161835 ssh_runner.go:195] Run: systemctl --version
I0818 18:49:08.455942 1161835 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
I0818 18:49:08.458685 1161835 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:08.459088 1161835 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
I0818 18:49:08.459124 1161835 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:08.459373 1161835 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
I0818 18:49:08.459539 1161835 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
I0818 18:49:08.459713 1161835 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
I0818 18:49:08.459862 1161835 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
I0818 18:49:08.544801 1161835 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 18:49:08.571020 1161835 main.go:141] libmachine: Making call to close driver server
I0818 18:49:08.571035 1161835 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:08.571329 1161835 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:08.571357 1161835 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
I0818 18:49:08.571359 1161835 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:08.571374 1161835 main.go:141] libmachine: Making call to close driver server
I0818 18:49:08.571383 1161835 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:08.571648 1161835 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:08.571753 1161835 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:08.571750 1161835 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-771033 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.31.0           | 045733566833c | 88.4MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kicbase/echo-server               | functional-771033 | 9056ab77afb8e | 4.94MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-771033 | fb89f4d92bc28 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.31.0           | 1766f54c897f0 | 67.4MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/kube-apiserver              | v1.31.0           | 604f5db92eaa8 | 94.2MB |
| registry.k8s.io/kube-proxy                  | v1.31.0           | ad83b2ca7b09e | 91.5MB |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 2e96e5913fc06 | 148MB  |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-771033 image ls --format table --alsologtostderr:
I0818 18:49:11.181985 1162014 out.go:345] Setting OutFile to fd 1 ...
I0818 18:49:11.182137 1162014 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:11.182153 1162014 out.go:358] Setting ErrFile to fd 2...
I0818 18:49:11.182160 1162014 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:11.182436 1162014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
I0818 18:49:11.183216 1162014 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:11.183393 1162014 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:11.183870 1162014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:11.183936 1162014 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:11.202015 1162014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41467
I0818 18:49:11.202554 1162014 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:11.203227 1162014 main.go:141] libmachine: Using API Version  1
I0818 18:49:11.203253 1162014 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:11.203677 1162014 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:11.203937 1162014 main.go:141] libmachine: (functional-771033) Calling .GetState
I0818 18:49:11.205979 1162014 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:11.206022 1162014 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:11.222230 1162014 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38257
I0818 18:49:11.222790 1162014 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:11.223503 1162014 main.go:141] libmachine: Using API Version  1
I0818 18:49:11.223537 1162014 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:11.223925 1162014 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:11.224154 1162014 main.go:141] libmachine: (functional-771033) Calling .DriverName
I0818 18:49:11.224394 1162014 ssh_runner.go:195] Run: systemctl --version
I0818 18:49:11.224423 1162014 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
I0818 18:49:11.227613 1162014 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:11.228060 1162014 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
I0818 18:49:11.228092 1162014 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:11.228256 1162014 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
I0818 18:49:11.228407 1162014 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
I0818 18:49:11.228581 1162014 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
I0818 18:49:11.228763 1162014 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
I0818 18:49:11.317090 1162014 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 18:49:11.346250 1162014 main.go:141] libmachine: Making call to close driver server
I0818 18:49:11.346267 1162014 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:11.346613 1162014 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
I0818 18:49:11.346655 1162014 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:11.346666 1162014 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:11.346677 1162014 main.go:141] libmachine: Making call to close driver server
I0818 18:49:11.346685 1162014 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:11.346916 1162014 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:11.346931 1162014 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-771033 image ls --format json --alsologtostderr:
[{"id":"604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"94200000"},{"id":"1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"67400000"},{"id":"ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"91500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"fb89f4d92bc28cff6e36fad0268136e6612d5d2bc983eceda21edd05d44d2f9f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-771033"],"size":"30"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b
27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"148000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30"
,"repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-771033"],"size":"4940000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"88400000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-771033 image ls --format json --alsologtostderr:
I0818 18:49:10.950853 1161990 out.go:345] Setting OutFile to fd 1 ...
I0818 18:49:10.951100 1161990 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:10.951109 1161990 out.go:358] Setting ErrFile to fd 2...
I0818 18:49:10.951113 1161990 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:10.951317 1161990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
I0818 18:49:10.951903 1161990 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:10.952005 1161990 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:10.952396 1161990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:10.952444 1161990 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:10.967935 1161990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44455
I0818 18:49:10.968387 1161990 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:10.969054 1161990 main.go:141] libmachine: Using API Version  1
I0818 18:49:10.969085 1161990 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:10.969484 1161990 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:10.969704 1161990 main.go:141] libmachine: (functional-771033) Calling .GetState
I0818 18:49:10.971672 1161990 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:10.971726 1161990 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:10.986787 1161990 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
I0818 18:49:10.987266 1161990 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:10.987830 1161990 main.go:141] libmachine: Using API Version  1
I0818 18:49:10.987857 1161990 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:10.988175 1161990 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:10.988362 1161990 main.go:141] libmachine: (functional-771033) Calling .DriverName
I0818 18:49:10.988575 1161990 ssh_runner.go:195] Run: systemctl --version
I0818 18:49:10.988607 1161990 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
I0818 18:49:10.991740 1161990 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:10.992176 1161990 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
I0818 18:49:10.992216 1161990 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:10.992352 1161990 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
I0818 18:49:10.992530 1161990 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
I0818 18:49:10.992709 1161990 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
I0818 18:49:10.992854 1161990 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
I0818 18:49:11.077091 1161990 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 18:49:11.125765 1161990 main.go:141] libmachine: Making call to close driver server
I0818 18:49:11.125793 1161990 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:11.126099 1161990 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:11.126121 1161990 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:11.126131 1161990 main.go:141] libmachine: Making call to close driver server
I0818 18:49:11.126137 1161990 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
I0818 18:49:11.126140 1161990 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:11.126507 1161990 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:11.126527 1161990 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
I0818 18:49:11.126541 1161990 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-771033 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 1766f54c897f0e57040741e6741462f2e3a7d754705f446c9f729c7e1230fb94
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "67400000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-771033
size: "4940000"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "148000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: fb89f4d92bc28cff6e36fad0268136e6612d5d2bc983eceda21edd05d44d2f9f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-771033
size: "30"
- id: ad83b2ca7b09e6162f96f933eecded731cbebf049c78f941fd0ce560a86b6494
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "91500000"
- id: 604f5db92eaa823d11c141d8825f1460206f6bf29babca2a909a698dc22055d3
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "94200000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 045733566833c40b15806c9b87d27f08e455e069833752e0e6ad7a76d37cb2b1
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "88400000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-771033 image ls --format yaml --alsologtostderr:
I0818 18:49:08.625858 1161858 out.go:345] Setting OutFile to fd 1 ...
I0818 18:49:08.625975 1161858 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:08.625984 1161858 out.go:358] Setting ErrFile to fd 2...
I0818 18:49:08.625990 1161858 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:08.626168 1161858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
I0818 18:49:08.626728 1161858 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:08.626829 1161858 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:08.627208 1161858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:08.627257 1161858 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:08.642108 1161858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40171
I0818 18:49:08.642619 1161858 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:08.643293 1161858 main.go:141] libmachine: Using API Version  1
I0818 18:49:08.643323 1161858 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:08.643753 1161858 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:08.644001 1161858 main.go:141] libmachine: (functional-771033) Calling .GetState
I0818 18:49:08.645978 1161858 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:08.646021 1161858 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:08.661099 1161858 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
I0818 18:49:08.661516 1161858 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:08.662004 1161858 main.go:141] libmachine: Using API Version  1
I0818 18:49:08.662025 1161858 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:08.662391 1161858 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:08.662590 1161858 main.go:141] libmachine: (functional-771033) Calling .DriverName
I0818 18:49:08.662808 1161858 ssh_runner.go:195] Run: systemctl --version
I0818 18:49:08.662843 1161858 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
I0818 18:49:08.665430 1161858 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:08.665852 1161858 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
I0818 18:49:08.665877 1161858 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:08.666037 1161858 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
I0818 18:49:08.666201 1161858 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
I0818 18:49:08.666365 1161858 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
I0818 18:49:08.666524 1161858 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
I0818 18:49:08.752090 1161858 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0818 18:49:08.781189 1161858 main.go:141] libmachine: Making call to close driver server
I0818 18:49:08.781239 1161858 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:08.781565 1161858 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:08.781589 1161858 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:08.781599 1161858 main.go:141] libmachine: Making call to close driver server
I0818 18:49:08.781610 1161858 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:08.781839 1161858 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:08.781856 1161858 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh pgrep buildkitd: exit status 1 (194.050786ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image build -t localhost/my-image:functional-771033 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-771033 image build -t localhost/my-image:functional-771033 testdata/build --alsologtostderr: (3.554909145s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-771033 image build -t localhost/my-image:functional-771033 testdata/build --alsologtostderr:
I0818 18:49:09.034884 1161914 out.go:345] Setting OutFile to fd 1 ...
I0818 18:49:09.035227 1161914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:09.035241 1161914 out.go:358] Setting ErrFile to fd 2...
I0818 18:49:09.035248 1161914 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0818 18:49:09.035536 1161914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
I0818 18:49:09.036347 1161914 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:09.037324 1161914 config.go:182] Loaded profile config "functional-771033": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0818 18:49:09.037916 1161914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:09.038000 1161914 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:09.053226 1161914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39209
I0818 18:49:09.053746 1161914 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:09.054413 1161914 main.go:141] libmachine: Using API Version  1
I0818 18:49:09.054440 1161914 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:09.054819 1161914 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:09.055041 1161914 main.go:141] libmachine: (functional-771033) Calling .GetState
I0818 18:49:09.056782 1161914 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
I0818 18:49:09.056818 1161914 main.go:141] libmachine: Launching plugin server for driver kvm2
I0818 18:49:09.071972 1161914 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34939
I0818 18:49:09.072427 1161914 main.go:141] libmachine: () Calling .GetVersion
I0818 18:49:09.073088 1161914 main.go:141] libmachine: Using API Version  1
I0818 18:49:09.073120 1161914 main.go:141] libmachine: () Calling .SetConfigRaw
I0818 18:49:09.073471 1161914 main.go:141] libmachine: () Calling .GetMachineName
I0818 18:49:09.073706 1161914 main.go:141] libmachine: (functional-771033) Calling .DriverName
I0818 18:49:09.073920 1161914 ssh_runner.go:195] Run: systemctl --version
I0818 18:49:09.073945 1161914 main.go:141] libmachine: (functional-771033) Calling .GetSSHHostname
I0818 18:49:09.077162 1161914 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:09.077568 1161914 main.go:141] libmachine: (functional-771033) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c6:04", ip: ""} in network mk-functional-771033: {Iface:virbr1 ExpiryTime:2024-08-18 19:45:17 +0000 UTC Type:0 Mac:52:54:00:39:c6:04 Iaid: IPaddr:192.168.39.95 Prefix:24 Hostname:functional-771033 Clientid:01:52:54:00:39:c6:04}
I0818 18:49:09.077600 1161914 main.go:141] libmachine: (functional-771033) DBG | domain functional-771033 has defined IP address 192.168.39.95 and MAC address 52:54:00:39:c6:04 in network mk-functional-771033
I0818 18:49:09.077766 1161914 main.go:141] libmachine: (functional-771033) Calling .GetSSHPort
I0818 18:49:09.077954 1161914 main.go:141] libmachine: (functional-771033) Calling .GetSSHKeyPath
I0818 18:49:09.078124 1161914 main.go:141] libmachine: (functional-771033) Calling .GetSSHUsername
I0818 18:49:09.078271 1161914 sshutil.go:53] new ssh client: &{IP:192.168.39.95 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/functional-771033/id_rsa Username:docker}
I0818 18:49:09.164217 1161914 build_images.go:161] Building image from path: /tmp/build.1510305363.tar
I0818 18:49:09.164299 1161914 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0818 18:49:09.176707 1161914 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1510305363.tar
I0818 18:49:09.181349 1161914 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1510305363.tar: stat -c "%s %y" /var/lib/minikube/build/build.1510305363.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1510305363.tar': No such file or directory
I0818 18:49:09.181376 1161914 ssh_runner.go:362] scp /tmp/build.1510305363.tar --> /var/lib/minikube/build/build.1510305363.tar (3072 bytes)
I0818 18:49:09.208126 1161914 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1510305363
I0818 18:49:09.218422 1161914 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1510305363 -xf /var/lib/minikube/build/build.1510305363.tar
I0818 18:49:09.228891 1161914 docker.go:360] Building image: /var/lib/minikube/build/build.1510305363
I0818 18:49:09.228963 1161914 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-771033 /var/lib/minikube/build/build.1510305363
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.7s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:4864137b9a4c7d475aff5f618e57ba50ec8c180d760b0ed38f30dcc17a23621b
#8 writing image sha256:4864137b9a4c7d475aff5f618e57ba50ec8c180d760b0ed38f30dcc17a23621b done
#8 naming to localhost/my-image:functional-771033 done
#8 DONE 0.1s
I0818 18:49:12.503188 1161914 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-771033 /var/lib/minikube/build/build.1510305363: (3.274190515s)
I0818 18:49:12.503273 1161914 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1510305363
I0818 18:49:12.515450 1161914 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1510305363.tar
I0818 18:49:12.528812 1161914 build_images.go:217] Built localhost/my-image:functional-771033 from /tmp/build.1510305363.tar
I0818 18:49:12.528846 1161914 build_images.go:133] succeeded building to: functional-771033
I0818 18:49:12.528853 1161914 build_images.go:134] failed building to: 
I0818 18:49:12.528878 1161914 main.go:141] libmachine: Making call to close driver server
I0818 18:49:12.528892 1161914 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:12.529284 1161914 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:12.529322 1161914 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
I0818 18:49:12.529342 1161914 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:12.529353 1161914 main.go:141] libmachine: Making call to close driver server
I0818 18:49:12.529361 1161914 main.go:141] libmachine: (functional-771033) Calling .Close
I0818 18:49:12.529611 1161914 main.go:141] libmachine: Successfully made call to close driver server
I0818 18:49:12.529649 1161914 main.go:141] libmachine: Making call to close connection to plugin binary
I0818 18:49:12.529650 1161914 main.go:141] libmachine: (functional-771033) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls
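As the build_images.go lines above show, the local testdata/build context is tarred, copied to /var/lib/minikube/build inside the guest, and built there with the guest's docker. The user-facing equivalent is just the single image build command, after which the result should appear in image ls:

out/minikube-linux-amd64 -p functional-771033 image build -t localhost/my-image:functional-771033 testdata/build
out/minikube-linux-amd64 -p functional-771033 image ls | grep my-image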
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.96s)

TestFunctional/parallel/ImageCommands/Setup (1.58s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.556433329s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-771033
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.58s)

TestFunctional/parallel/DockerEnv/bash (0.78s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-771033 docker-env) && out/minikube-linux-amd64 status -p functional-771033"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-771033 docker-env) && docker images"
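docker-env prints shell exports that point the local docker CLI at the Docker daemon inside the VM; the test evals them and then confirms both minikube status and docker images work against that daemon. A minimal sketch of the same flow:

eval $(out/minikube-linux-amd64 -p functional-771033 docker-env)
docker images   # now lists images from the VM's Docker daemon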
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.2s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image load --daemon kicbase/echo-server:functional-771033 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.20s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

TestFunctional/parallel/ProfileCmd/profile_list (0.32s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "265.710244ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "51.372157ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "243.045359ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "62.826935ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

TestFunctional/parallel/MountCmd/any-port (8.34s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdany-port2404094085/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724006919422293683" to /tmp/TestFunctionalparallelMountCmdany-port2404094085/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724006919422293683" to /tmp/TestFunctionalparallelMountCmdany-port2404094085/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724006919422293683" to /tmp/TestFunctionalparallelMountCmdany-port2404094085/001/test-1724006919422293683
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (200.044127ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 18 18:48 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 18 18:48 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 18 18:48 test-1724006919422293683
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh cat /mount-9p/test-1724006919422293683
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-771033 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [55f8d802-1c69-4ad9-83eb-58fe27b8827b] Pending
helpers_test.go:344: "busybox-mount" [55f8d802-1c69-4ad9-83eb-58fe27b8827b] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [55f8d802-1c69-4ad9-83eb-58fe27b8827b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [55f8d802-1c69-4ad9-83eb-58fe27b8827b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003842851s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-771033 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdany-port2404094085/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.34s)
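The first findmnt attempt above exits 1 because it races the mount daemon, and the test simply retries. A rough Go reproduction outside the harness (a sketch, assuming the functional-771033 profile is running and a local /tmp/demo-mount directory exists) looks like this:

    // mountcheck.go: start a 9p mount in the background and poll until the guest sees it.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"time"
    )

    func main() {
    	mount := exec.Command("minikube", "-p", "functional-771033", "mount", "/tmp/demo-mount:/mount-9p")
    	if err := mount.Start(); err != nil {
    		log.Fatalf("starting mount: %v", err)
    	}
    	defer mount.Process.Kill() // the test stops the mount daemon the same way once it is done

    	// Poll findmnt inside the guest; give up after ~30s.
    	for deadline := time.Now().Add(30 * time.Second); time.Now().Before(deadline); time.Sleep(time.Second) {
    		out, err := exec.Command("minikube", "-p", "functional-771033",
    			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
    		if err == nil {
    			fmt.Printf("mount visible in guest:\n%s", out)
    			return
    		}
    	}
    	log.Fatal("mount never became visible in the guest")
    }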

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.77s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image load --daemon kicbase/echo-server:functional-771033 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.77s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-771033
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image load --daemon kicbase/echo-server:functional-771033 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.46s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image save kicbase/echo-server:functional-771033 /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.4s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image rm kicbase/echo-server:functional-771033 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.40s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image load /home/jenkins/workspace/KVM_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.65s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-771033
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 image save --daemon kicbase/echo-server:functional-771033 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-771033
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.45s)
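The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon steps above form a save/remove/reload round trip. A condensed Go sketch of that round trip (not part of the suite; the profile name, image tag and /tmp path are assumptions taken from this run) could look like:

    // imageroundtrip.go: save an image from the cluster to a tarball, remove it, load it back.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func run(args ...string) string {
    	out, err := exec.Command("minikube", append([]string{"-p", "functional-771033"}, args...)...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("minikube %v: %v\n%s", args, err, out)
    	}
    	return string(out)
    }

    func main() {
    	run("image", "save", "kicbase/echo-server:functional-771033", "/tmp/echo-server-save.tar")
    	run("image", "rm", "kicbase/echo-server:functional-771033")
    	run("image", "load", "/tmp/echo-server-save.tar")
    	fmt.Println(run("image", "ls")) // the reloaded tag should be listed again
    }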

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.93s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdspecific-port3269157901/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (201.52417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdspecific-port3269157901/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh "sudo umount -f /mount-9p": exit status 1 (189.017192ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-771033 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdspecific-port3269157901/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.93s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1110543233/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1110543233/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1110543233/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T" /mount1: exit status 1 (259.621201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-771033 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1110543233/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1110543233/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-771033 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1110543233/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.32s)
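This test starts three mounts of the same directory and tears them all down with a single `mount --kill=true`, after which the helper finds the parent processes already gone ("assuming dead"). A small Go sketch of that pattern, assuming the functional-771033 profile and a local /tmp/demo-mount directory:

    // mountcleanup.go: spin up three concurrent mounts, then clean them all up with --kill=true.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	for _, target := range []string{"/mount1", "/mount2", "/mount3"} {
    		cmd := exec.Command("minikube", "-p", "functional-771033", "mount", "/tmp/demo-mount:"+target)
    		if err := cmd.Start(); err != nil {
    			log.Fatalf("starting mount for %s: %v", target, err)
    		}
    	}
    	// One call tears down every mount daemon for the profile.
    	out, err := exec.Command("minikube", "mount", "-p", "functional-771033", "--kill=true").CombinedOutput()
    	if err != nil {
    		log.Fatalf("mount --kill=true: %v\n%s", err, out)
    	}
    	fmt.Printf("cleanup output:\n%s", out)
    }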

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.87s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.87s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 service list -o json
functional_test.go:1494: Took "834.285311ms" to run "out/minikube-linux-amd64 -p functional-771033 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.83s)
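To consume the same `service list -o json` output programmatically, a minimal Go sketch (not part of the suite) that assumes only that the command prints valid JSON and pretty-prints whatever it gets:

    // servicelist.go: decode "minikube service list -o json" without asserting a schema.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "functional-771033", "service", "list", "-o", "json").Output()
    	if err != nil {
    		log.Fatalf("service list failed: %v", err)
    	}
    	var v interface{}
    	if err := json.Unmarshal(out, &v); err != nil {
    		log.Fatalf("output is not valid JSON: %v", err)
    	}
    	pretty, _ := json.MarshalIndent(v, "", "  ")
    	fmt.Println(string(pretty))
    }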

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.39.95:32198
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

TestFunctional/parallel/ServiceCmd/Format (0.31s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

TestFunctional/parallel/ServiceCmd/URL (0.34s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-771033 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.39.95:32198
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
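The endpoint printed above (http://192.168.39.95:32198) is a NodePort URL; a sketch of resolving it the same way and probing it with a plain HTTP GET, assuming the hello-node deployment from this run is still present:

    // serviceprobe.go: resolve the hello-node URL via minikube and issue a GET against it.
    package main

    import (
    	"fmt"
    	"log"
    	"net/http"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "functional-771033", "service", "hello-node", "--url").Output()
    	if err != nil {
    		log.Fatalf("service --url failed: %v", err)
    	}
    	url := strings.TrimSpace(strings.Split(string(out), "\n")[0]) // e.g. http://192.168.39.95:32198
    	resp, err := http.Get(url)
    	if err != nil {
    		log.Fatalf("GET %s: %v", url, err)
    	}
    	defer resp.Body.Close()
    	fmt.Printf("%s -> %s\n", url, resp.Status)
    }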

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-771033
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-771033
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-771033
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (218.71s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-809683 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 
E0818 18:51:53.183237 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:52:20.884880 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-809683 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 : (3m38.042239158s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (218.71s)
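For reference, the cluster above is created with `minikube start --ha`, which provisions multiple control-plane nodes. A minimal Go sketch (not part of the suite) that issues the same start flags used in this run and then asks for status; expect it to take several minutes on KVM:

    // hastart.go: bring up a multi-control-plane cluster and print its status.
    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	start := exec.Command("minikube", "start", "-p", "ha-809683",
    		"--wait=true", "--memory=2200", "--ha", "--driver=kvm2")
    	start.Stdout, start.Stderr = os.Stdout, os.Stderr
    	if err := start.Run(); err != nil {
    		log.Fatalf("start failed: %v", err)
    	}
    	status, err := exec.Command("minikube", "-p", "ha-809683", "status").CombinedOutput()
    	if err != nil {
    		// status exits non-zero when any node is not fully up; still print what we got.
    		fmt.Printf("status reported a problem: %v\n", err)
    	}
    	fmt.Printf("%s", status)
    }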

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.39s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-809683 -- rollout status deployment/busybox: (3.072439027s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-2977h -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-rqkkj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-zjbx9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-2977h -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-rqkkj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-zjbx9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-2977h -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-rqkkj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-zjbx9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.39s)
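The per-pod DNS checks above follow a simple pattern: list the pod names, then run nslookup against a few well-known names in each one. A condensed Go sketch of the same loop using `minikube kubectl -- ...` (not part of the suite; it lists all pods in the default namespace rather than filtering on the busybox deployment):

    // dnscheck.go: run nslookup inside each pod, mirroring the per-pod DNS checks above.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func kubectl(args ...string) string {
    	full := append([]string{"kubectl", "-p", "ha-809683", "--"}, args...)
    	out, err := exec.Command("minikube", full...).CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
    	}
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
    	for _, pod := range pods {
    		for _, name := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
    			fmt.Printf("--- %s resolving %s ---\n%s\n", pod, name, kubectl("exec", pod, "--", "nslookup", name))
    		}
    	}
    }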

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-2977h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-2977h -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-rqkkj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-rqkkj -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-zjbx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-809683 -- exec busybox-7dff88458-zjbx9 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

TestMultiControlPlane/serial/AddWorkerNode (63.06s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-809683 -v=7 --alsologtostderr
E0818 18:53:36.718508 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:36.724928 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:36.736344 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:36.757870 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:36.799420 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:36.880938 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:37.042512 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:37.364369 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:38.006066 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:39.287643 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:41.849388 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:46.970745 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:53:57.213050 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:54:17.694949 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-809683 -v=7 --alsologtostderr: (1m2.252372064s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (63.06s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-809683 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.54s)

TestMultiControlPlane/serial/CopyFile (12.97s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp testdata/cp-test.txt ha-809683:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1105273839/001/cp-test_ha-809683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683:/home/docker/cp-test.txt ha-809683-m02:/home/docker/cp-test_ha-809683_ha-809683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test_ha-809683_ha-809683-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683:/home/docker/cp-test.txt ha-809683-m03:/home/docker/cp-test_ha-809683_ha-809683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test_ha-809683_ha-809683-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683:/home/docker/cp-test.txt ha-809683-m04:/home/docker/cp-test_ha-809683_ha-809683-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test_ha-809683_ha-809683-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp testdata/cp-test.txt ha-809683-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1105273839/001/cp-test_ha-809683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m02:/home/docker/cp-test.txt ha-809683:/home/docker/cp-test_ha-809683-m02_ha-809683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test_ha-809683-m02_ha-809683.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m02:/home/docker/cp-test.txt ha-809683-m03:/home/docker/cp-test_ha-809683-m02_ha-809683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test_ha-809683-m02_ha-809683-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m02:/home/docker/cp-test.txt ha-809683-m04:/home/docker/cp-test_ha-809683-m02_ha-809683-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test_ha-809683-m02_ha-809683-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp testdata/cp-test.txt ha-809683-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1105273839/001/cp-test_ha-809683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m03:/home/docker/cp-test.txt ha-809683:/home/docker/cp-test_ha-809683-m03_ha-809683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test_ha-809683-m03_ha-809683.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m03:/home/docker/cp-test.txt ha-809683-m02:/home/docker/cp-test_ha-809683-m03_ha-809683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test_ha-809683-m03_ha-809683-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m03:/home/docker/cp-test.txt ha-809683-m04:/home/docker/cp-test_ha-809683-m03_ha-809683-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test_ha-809683-m03_ha-809683-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp testdata/cp-test.txt ha-809683-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1105273839/001/cp-test_ha-809683-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m04:/home/docker/cp-test.txt ha-809683:/home/docker/cp-test_ha-809683-m04_ha-809683.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683 "sudo cat /home/docker/cp-test_ha-809683-m04_ha-809683.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m04:/home/docker/cp-test.txt ha-809683-m02:/home/docker/cp-test_ha-809683-m04_ha-809683-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m02 "sudo cat /home/docker/cp-test_ha-809683-m04_ha-809683-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 cp ha-809683-m04:/home/docker/cp-test.txt ha-809683-m03:/home/docker/cp-test_ha-809683-m04_ha-809683-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 ssh -n ha-809683-m03 "sudo cat /home/docker/cp-test_ha-809683-m04_ha-809683-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (12.97s)
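The copy matrix above pushes cp-test.txt onto every node and reads it back from every node over ssh. A scaled-down Go sketch of the same verification (not part of the suite; node names follow this run and a local testdata/cp-test.txt is assumed to exist):

    // copymatrix.go: copy a local file onto each node and read it back over ssh.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	nodes := []string{"ha-809683", "ha-809683-m02", "ha-809683-m03", "ha-809683-m04"}
    	for _, node := range nodes {
    		cp := exec.Command("minikube", "-p", "ha-809683", "cp", "testdata/cp-test.txt", node+":/home/docker/cp-test.txt")
    		if out, err := cp.CombinedOutput(); err != nil {
    			log.Fatalf("cp to %s: %v\n%s", node, err, out)
    		}
    		cat := exec.Command("minikube", "-p", "ha-809683", "ssh", "-n", node, "sudo cat /home/docker/cp-test.txt")
    		out, err := cat.CombinedOutput()
    		if err != nil {
    			log.Fatalf("ssh cat on %s: %v\n%s", node, err, out)
    		}
    		fmt.Printf("%s: %s", node, out)
    	}
    }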

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.92s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-809683 node stop m02 -v=7 --alsologtostderr: (13.309643151s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr: exit status 7 (609.496523ms)

                                                
                                                
-- stdout --
	ha-809683
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-809683-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-809683-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-809683-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 18:54:57.872496 1166622 out.go:345] Setting OutFile to fd 1 ...
	I0818 18:54:57.872593 1166622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:54:57.872601 1166622 out.go:358] Setting ErrFile to fd 2...
	I0818 18:54:57.872605 1166622 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 18:54:57.872811 1166622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 18:54:57.872969 1166622 out.go:352] Setting JSON to false
	I0818 18:54:57.872996 1166622 mustload.go:65] Loading cluster: ha-809683
	I0818 18:54:57.873098 1166622 notify.go:220] Checking for updates...
	I0818 18:54:57.873359 1166622 config.go:182] Loaded profile config "ha-809683": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 18:54:57.873381 1166622 status.go:255] checking status of ha-809683 ...
	I0818 18:54:57.873750 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:57.873809 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:57.888303 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43223
	I0818 18:54:57.888894 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:57.889527 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:57.889551 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:57.889906 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:57.890111 1166622 main.go:141] libmachine: (ha-809683) Calling .GetState
	I0818 18:54:57.891990 1166622 status.go:330] ha-809683 host status = "Running" (err=<nil>)
	I0818 18:54:57.892006 1166622 host.go:66] Checking if "ha-809683" exists ...
	I0818 18:54:57.892281 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:57.892325 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:57.907003 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32891
	I0818 18:54:57.907350 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:57.907748 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:57.907777 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:57.908074 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:57.908256 1166622 main.go:141] libmachine: (ha-809683) Calling .GetIP
	I0818 18:54:57.911124 1166622 main.go:141] libmachine: (ha-809683) DBG | domain ha-809683 has defined MAC address 52:54:00:60:e3:e5 in network mk-ha-809683
	I0818 18:54:57.911555 1166622 main.go:141] libmachine: (ha-809683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e3:e5", ip: ""} in network mk-ha-809683: {Iface:virbr1 ExpiryTime:2024-08-18 19:49:56 +0000 UTC Type:0 Mac:52:54:00:60:e3:e5 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-809683 Clientid:01:52:54:00:60:e3:e5}
	I0818 18:54:57.911591 1166622 main.go:141] libmachine: (ha-809683) DBG | domain ha-809683 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:e3:e5 in network mk-ha-809683
	I0818 18:54:57.911706 1166622 host.go:66] Checking if "ha-809683" exists ...
	I0818 18:54:57.912067 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:57.912110 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:57.927567 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46491
	I0818 18:54:57.928105 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:57.928612 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:57.928638 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:57.928961 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:57.929156 1166622 main.go:141] libmachine: (ha-809683) Calling .DriverName
	I0818 18:54:57.929414 1166622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:54:57.929456 1166622 main.go:141] libmachine: (ha-809683) Calling .GetSSHHostname
	I0818 18:54:57.932788 1166622 main.go:141] libmachine: (ha-809683) DBG | domain ha-809683 has defined MAC address 52:54:00:60:e3:e5 in network mk-ha-809683
	I0818 18:54:57.933351 1166622 main.go:141] libmachine: (ha-809683) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:60:e3:e5", ip: ""} in network mk-ha-809683: {Iface:virbr1 ExpiryTime:2024-08-18 19:49:56 +0000 UTC Type:0 Mac:52:54:00:60:e3:e5 Iaid: IPaddr:192.168.39.154 Prefix:24 Hostname:ha-809683 Clientid:01:52:54:00:60:e3:e5}
	I0818 18:54:57.933377 1166622 main.go:141] libmachine: (ha-809683) DBG | domain ha-809683 has defined IP address 192.168.39.154 and MAC address 52:54:00:60:e3:e5 in network mk-ha-809683
	I0818 18:54:57.933547 1166622 main.go:141] libmachine: (ha-809683) Calling .GetSSHPort
	I0818 18:54:57.933740 1166622 main.go:141] libmachine: (ha-809683) Calling .GetSSHKeyPath
	I0818 18:54:57.933914 1166622 main.go:141] libmachine: (ha-809683) Calling .GetSSHUsername
	I0818 18:54:57.934045 1166622 sshutil.go:53] new ssh client: &{IP:192.168.39.154 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/ha-809683/id_rsa Username:docker}
	I0818 18:54:58.016384 1166622 ssh_runner.go:195] Run: systemctl --version
	I0818 18:54:58.022553 1166622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:54:58.037357 1166622 kubeconfig.go:125] found "ha-809683" server: "https://192.168.39.254:8443"
	I0818 18:54:58.037386 1166622 api_server.go:166] Checking apiserver status ...
	I0818 18:54:58.037425 1166622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:54:58.051854 1166622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1942/cgroup
	W0818 18:54:58.061081 1166622 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1942/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 18:54:58.061143 1166622 ssh_runner.go:195] Run: ls
	I0818 18:54:58.065139 1166622 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 18:54:58.070169 1166622 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 18:54:58.070193 1166622 status.go:422] ha-809683 apiserver status = Running (err=<nil>)
	I0818 18:54:58.070205 1166622 status.go:257] ha-809683 status: &{Name:ha-809683 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:54:58.070225 1166622 status.go:255] checking status of ha-809683-m02 ...
	I0818 18:54:58.070521 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.070567 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.085507 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42359
	I0818 18:54:58.085885 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.086343 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.086363 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.086707 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.086932 1166622 main.go:141] libmachine: (ha-809683-m02) Calling .GetState
	I0818 18:54:58.088461 1166622 status.go:330] ha-809683-m02 host status = "Stopped" (err=<nil>)
	I0818 18:54:58.088477 1166622 status.go:343] host is not running, skipping remaining checks
	I0818 18:54:58.088485 1166622 status.go:257] ha-809683-m02 status: &{Name:ha-809683-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:54:58.088502 1166622 status.go:255] checking status of ha-809683-m03 ...
	I0818 18:54:58.088894 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.088931 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.103182 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42197
	I0818 18:54:58.103577 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.104089 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.104118 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.104438 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.104602 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .GetState
	I0818 18:54:58.106216 1166622 status.go:330] ha-809683-m03 host status = "Running" (err=<nil>)
	I0818 18:54:58.106245 1166622 host.go:66] Checking if "ha-809683-m03" exists ...
	I0818 18:54:58.106632 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.106670 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.120660 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34527
	I0818 18:54:58.121094 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.121572 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.121596 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.121889 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.122055 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .GetIP
	I0818 18:54:58.125298 1166622 main.go:141] libmachine: (ha-809683-m03) DBG | domain ha-809683-m03 has defined MAC address 52:54:00:b7:61:6b in network mk-ha-809683
	I0818 18:54:58.125787 1166622 main.go:141] libmachine: (ha-809683-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:61:6b", ip: ""} in network mk-ha-809683: {Iface:virbr1 ExpiryTime:2024-08-18 19:52:11 +0000 UTC Type:0 Mac:52:54:00:b7:61:6b Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-809683-m03 Clientid:01:52:54:00:b7:61:6b}
	I0818 18:54:58.125812 1166622 main.go:141] libmachine: (ha-809683-m03) DBG | domain ha-809683-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:b7:61:6b in network mk-ha-809683
	I0818 18:54:58.125965 1166622 host.go:66] Checking if "ha-809683-m03" exists ...
	I0818 18:54:58.126244 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.126279 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.140693 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39089
	I0818 18:54:58.141098 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.141531 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.141554 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.141856 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.142046 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .DriverName
	I0818 18:54:58.142244 1166622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:54:58.142268 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .GetSSHHostname
	I0818 18:54:58.144806 1166622 main.go:141] libmachine: (ha-809683-m03) DBG | domain ha-809683-m03 has defined MAC address 52:54:00:b7:61:6b in network mk-ha-809683
	I0818 18:54:58.145251 1166622 main.go:141] libmachine: (ha-809683-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:61:6b", ip: ""} in network mk-ha-809683: {Iface:virbr1 ExpiryTime:2024-08-18 19:52:11 +0000 UTC Type:0 Mac:52:54:00:b7:61:6b Iaid: IPaddr:192.168.39.157 Prefix:24 Hostname:ha-809683-m03 Clientid:01:52:54:00:b7:61:6b}
	I0818 18:54:58.145278 1166622 main.go:141] libmachine: (ha-809683-m03) DBG | domain ha-809683-m03 has defined IP address 192.168.39.157 and MAC address 52:54:00:b7:61:6b in network mk-ha-809683
	I0818 18:54:58.145409 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .GetSSHPort
	I0818 18:54:58.145571 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .GetSSHKeyPath
	I0818 18:54:58.145755 1166622 main.go:141] libmachine: (ha-809683-m03) Calling .GetSSHUsername
	I0818 18:54:58.145893 1166622 sshutil.go:53] new ssh client: &{IP:192.168.39.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/ha-809683-m03/id_rsa Username:docker}
	I0818 18:54:58.232346 1166622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:54:58.247713 1166622 kubeconfig.go:125] found "ha-809683" server: "https://192.168.39.254:8443"
	I0818 18:54:58.247745 1166622 api_server.go:166] Checking apiserver status ...
	I0818 18:54:58.247785 1166622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 18:54:58.261782 1166622 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup
	W0818 18:54:58.270393 1166622 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 18:54:58.270430 1166622 ssh_runner.go:195] Run: ls
	I0818 18:54:58.274458 1166622 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0818 18:54:58.278556 1166622 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0818 18:54:58.278583 1166622 status.go:422] ha-809683-m03 apiserver status = Running (err=<nil>)
	I0818 18:54:58.278595 1166622 status.go:257] ha-809683-m03 status: &{Name:ha-809683-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 18:54:58.278628 1166622 status.go:255] checking status of ha-809683-m04 ...
	I0818 18:54:58.278945 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.278987 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.294087 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37269
	I0818 18:54:58.294551 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.295059 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.295084 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.295390 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.295563 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .GetState
	I0818 18:54:58.297274 1166622 status.go:330] ha-809683-m04 host status = "Running" (err=<nil>)
	I0818 18:54:58.297291 1166622 host.go:66] Checking if "ha-809683-m04" exists ...
	I0818 18:54:58.297607 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.297641 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.311957 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35637
	I0818 18:54:58.312366 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.312819 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.312839 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.313121 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.313380 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .GetIP
	I0818 18:54:58.316220 1166622 main.go:141] libmachine: (ha-809683-m04) DBG | domain ha-809683-m04 has defined MAC address 52:54:00:19:7f:a4 in network mk-ha-809683
	I0818 18:54:58.316604 1166622 main.go:141] libmachine: (ha-809683-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:7f:a4", ip: ""} in network mk-ha-809683: {Iface:virbr1 ExpiryTime:2024-08-18 19:53:43 +0000 UTC Type:0 Mac:52:54:00:19:7f:a4 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-809683-m04 Clientid:01:52:54:00:19:7f:a4}
	I0818 18:54:58.316629 1166622 main.go:141] libmachine: (ha-809683-m04) DBG | domain ha-809683-m04 has defined IP address 192.168.39.190 and MAC address 52:54:00:19:7f:a4 in network mk-ha-809683
	I0818 18:54:58.316753 1166622 host.go:66] Checking if "ha-809683-m04" exists ...
	I0818 18:54:58.317056 1166622 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 18:54:58.317092 1166622 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 18:54:58.331780 1166622 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42141
	I0818 18:54:58.332257 1166622 main.go:141] libmachine: () Calling .GetVersion
	I0818 18:54:58.332818 1166622 main.go:141] libmachine: Using API Version  1
	I0818 18:54:58.332834 1166622 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 18:54:58.333218 1166622 main.go:141] libmachine: () Calling .GetMachineName
	I0818 18:54:58.333430 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .DriverName
	I0818 18:54:58.333645 1166622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 18:54:58.333667 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .GetSSHHostname
	I0818 18:54:58.336506 1166622 main.go:141] libmachine: (ha-809683-m04) DBG | domain ha-809683-m04 has defined MAC address 52:54:00:19:7f:a4 in network mk-ha-809683
	I0818 18:54:58.337021 1166622 main.go:141] libmachine: (ha-809683-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:19:7f:a4", ip: ""} in network mk-ha-809683: {Iface:virbr1 ExpiryTime:2024-08-18 19:53:43 +0000 UTC Type:0 Mac:52:54:00:19:7f:a4 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-809683-m04 Clientid:01:52:54:00:19:7f:a4}
	I0818 18:54:58.337047 1166622 main.go:141] libmachine: (ha-809683-m04) DBG | domain ha-809683-m04 has defined IP address 192.168.39.190 and MAC address 52:54:00:19:7f:a4 in network mk-ha-809683
	I0818 18:54:58.337233 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .GetSSHPort
	I0818 18:54:58.337425 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .GetSSHKeyPath
	I0818 18:54:58.337607 1166622 main.go:141] libmachine: (ha-809683-m04) Calling .GetSSHUsername
	I0818 18:54:58.337751 1166622 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/ha-809683-m04/id_rsa Username:docker}
	I0818 18:54:58.420564 1166622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 18:54:58.436473 1166622 status.go:257] ha-809683-m04 status: &{Name:ha-809683-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.92s)
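Note that `minikube status` above exits with status 7 once m02 is stopped, while still printing the per-node report; the test treats the non-zero exit as expected and inspects the output. A small Go sketch that captures the exit code the same way, without asserting what the specific value means beyond "at least one node is not running":

    // statuscode.go: run "minikube status" and surface its exit code instead of failing on it.
    package main

    import (
    	"errors"
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "ha-809683", "status").CombinedOutput()
    	fmt.Printf("%s", out)
    	var exitErr *exec.ExitError
    	if errors.As(err, &exitErr) {
    		fmt.Printf("status exit code: %d (non-zero while a node is stopped)\n", exitErr.ExitCode())
    	} else if err != nil {
    		log.Fatalf("could not run minikube: %v", err)
    	}
    }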

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0818 18:54:58.657089 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.38s)

TestMultiControlPlane/serial/RestartSecondaryNode (159.67s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 node start m02 -v=7 --alsologtostderr
E0818 18:56:20.579295 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:56:53.185166 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-809683 node start m02 -v=7 --alsologtostderr: (2m38.772386458s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (159.67s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.54s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (256.38s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-809683 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-809683 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-809683 -v=7 --alsologtostderr: (42.352406078s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-809683 --wait=true -v=7 --alsologtostderr
E0818 18:58:36.718000 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 18:59:04.421655 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:01:53.183796 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-809683 --wait=true -v=7 --alsologtostderr: (3m33.925314609s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-809683
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (256.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (7.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-809683 node delete m03 -v=7 --alsologtostderr: (6.427581555s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (7.16s)
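The kubectl get nodes go-template check above emits one line per node containing the value of that node's Ready condition, so a healthy cluster produces only "True" lines. The sketch below evaluates the same template body locally with Go's text/template against hand-written node data (illustrative, not captured from this run), just to show the shape of the output:

	// Minimal sketch: evaluate the node-readiness template used above against
	// fabricated data. The two items stand in for two Ready nodes.
	package main

	import (
		"os"
		"text/template"
	)

	const nodeReadyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	func main() {
		nodes := map[string]any{
			"items": []map[string]any{
				{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
				{"status": map[string]any{"conditions": []map[string]any{{"type": "Ready", "status": "True"}}}},
			},
		}
		t := template.Must(template.New("ready").Parse(nodeReadyTmpl))
		if err := t.Execute(os.Stdout, nodes); err != nil {
			panic(err)
		}
	}

Run as-is this prints two " True" lines, the same per-node pattern the kubectl command yields for a fully Ready cluster.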

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.37s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (39.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-809683 stop -v=7 --alsologtostderr: (38.982057491s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr: exit status 7 (100.124541ms)

                                                
                                                
-- stdout --
	ha-809683
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-809683-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-809683-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:02:41.969949 1169420 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:02:41.970058 1169420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:02:41.970067 1169420 out.go:358] Setting ErrFile to fd 2...
	I0818 19:02:41.970072 1169420 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:02:41.970269 1169420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 19:02:41.970416 1169420 out.go:352] Setting JSON to false
	I0818 19:02:41.970445 1169420 mustload.go:65] Loading cluster: ha-809683
	I0818 19:02:41.970579 1169420 notify.go:220] Checking for updates...
	I0818 19:02:41.970985 1169420 config.go:182] Loaded profile config "ha-809683": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 19:02:41.971006 1169420 status.go:255] checking status of ha-809683 ...
	I0818 19:02:41.971510 1169420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:02:41.971565 1169420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:41.989536 1169420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40557
	I0818 19:02:41.989927 1169420 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:41.990524 1169420 main.go:141] libmachine: Using API Version  1
	I0818 19:02:41.990556 1169420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:41.990876 1169420 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:41.991070 1169420 main.go:141] libmachine: (ha-809683) Calling .GetState
	I0818 19:02:41.992674 1169420 status.go:330] ha-809683 host status = "Stopped" (err=<nil>)
	I0818 19:02:41.992694 1169420 status.go:343] host is not running, skipping remaining checks
	I0818 19:02:41.992699 1169420 status.go:257] ha-809683 status: &{Name:ha-809683 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:02:41.992713 1169420 status.go:255] checking status of ha-809683-m02 ...
	I0818 19:02:41.992996 1169420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:02:41.993036 1169420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.007082 1169420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41233
	I0818 19:02:42.007516 1169420 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.007926 1169420 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.007964 1169420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.008303 1169420 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.008565 1169420 main.go:141] libmachine: (ha-809683-m02) Calling .GetState
	I0818 19:02:42.010154 1169420 status.go:330] ha-809683-m02 host status = "Stopped" (err=<nil>)
	I0818 19:02:42.010167 1169420 status.go:343] host is not running, skipping remaining checks
	I0818 19:02:42.010182 1169420 status.go:257] ha-809683-m02 status: &{Name:ha-809683-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:02:42.010198 1169420 status.go:255] checking status of ha-809683-m04 ...
	I0818 19:02:42.010476 1169420 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:02:42.010511 1169420 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:02:42.024425 1169420 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0818 19:02:42.024751 1169420 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:02:42.025183 1169420 main.go:141] libmachine: Using API Version  1
	I0818 19:02:42.025217 1169420 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:02:42.025530 1169420 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:02:42.025711 1169420 main.go:141] libmachine: (ha-809683-m04) Calling .GetState
	I0818 19:02:42.027165 1169420 status.go:330] ha-809683-m04 host status = "Stopped" (err=<nil>)
	I0818 19:02:42.027184 1169420 status.go:343] host is not running, skipping remaining checks
	I0818 19:02:42.027190 1169420 status.go:257] ha-809683-m04 status: &{Name:ha-809683-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (39.08s)
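minikube status reports a stopped cluster through its exit code as well as its output, so the exit status 7 above is the expected companion of the all-Stopped table rather than a failure of the command itself. A minimal sketch, assuming the same binary path and profile name as this run, of shelling out to it from Go while keeping the printed table when the process exits non-zero:

	// Minimal sketch: run "minikube status" and treat a non-zero exit as data
	// (the cluster being stopped) instead of aborting. Binary path and profile
	// name are copied from this run and would differ elsewhere.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-809683", "status")
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			// Non-zero exit: the status command ran but reports a not-running cluster.
			fmt.Printf("status exited with code %d (expected while the cluster is stopped)\n", exitErr.ExitCode())
		} else if err != nil {
			panic(err) // the binary itself could not be started
		}
		fmt.Print(string(out))
	}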

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (144.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-809683 --wait=true -v=7 --alsologtostderr --driver=kvm2 
E0818 19:03:16.246470 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:03:36.718613 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-809683 --wait=true -v=7 --alsologtostderr --driver=kvm2 : (2m23.549239325s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (144.27s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.38s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (82.15s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-809683 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-809683 --control-plane -v=7 --alsologtostderr: (1m21.302672791s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-809683 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (82.15s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.53s)

                                                
                                    
TestImageBuild/serial/Setup (49.9s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-722297 --driver=kvm2 
E0818 19:06:53.185693 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-722297 --driver=kvm2 : (49.898843554s)
--- PASS: TestImageBuild/serial/Setup (49.90s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.98s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-722297
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-722297: (1.982370769s)
--- PASS: TestImageBuild/serial/NormalBuild (1.98s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (1.27s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-722297
image_test.go:99: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-722297: (1.271319814s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (1.27s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (1.01s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-722297
image_test.go:133: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-722297: (1.010694257s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.01s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-722297
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.77s)

                                                
                                    
TestJSONOutput/start/Command (65.68s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-274643 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-274643 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 : (1m5.683089185s)
--- PASS: TestJSONOutput/start/Command (65.68s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.56s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-274643 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.56s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.52s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-274643 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.52s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (12.61s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-274643 --output=json --user=testUser
E0818 19:08:36.718039 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-274643 --output=json --user=testUser: (12.607172138s)
--- PASS: TestJSONOutput/stop/Command (12.61s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.19s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-999050 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-999050 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (57.9226ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"18b86edb-e0b3-4985-8dad-b4d53f32031f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-999050] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8133514f-84b0-448f-8868-431de0122559","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19423"}}
	{"specversion":"1.0","id":"abce34f9-cf86-453e-a1ed-965d98c138e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6b838deb-363f-4747-8e47-865448daccb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig"}}
	{"specversion":"1.0","id":"f971b25e-f823-4b7d-895a-dd69f23bc7f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube"}}
	{"specversion":"1.0","id":"8bfa3089-c5cf-4a99-a9d1-a30366add252","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"664f39fc-18d0-4626-ae95-f6ed60472b01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2b9dbbc9-43f7-4727-9a9f-636c0113d864","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-999050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-999050
--- PASS: TestErrorJSONOutput (0.19s)
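Every line that minikube emits with --output=json is a self-contained CloudEvents-style envelope like the ones captured above. The sketch below decodes the final error event from this run; the struct simply mirrors the fields visible in that output and is not an authoritative minikube schema:

	// Minimal sketch: unmarshal one JSON event line copied from the output above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type minikubeEvent struct {
		SpecVersion     string            `json:"specversion"`
		ID              string            `json:"id"`
		Source          string            `json:"source"`
		Type            string            `json:"type"`
		DataContentType string            `json:"datacontenttype"`
		Data            map[string]string `json:"data"`
	}

	func main() {
		line := `{"specversion":"1.0","id":"2b9dbbc9-43f7-4727-9a9f-636c0113d864","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
		var ev minikubeEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Type, ev.Data["name"], ev.Data["message"])
	}

The step and info lines shown above unmarshal into the same shape, with the data map carrying different keys.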

                                                
                                    
TestMainNoArgs (0.04s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

                                                
                                    
TestMinikubeProfile (100.59s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-147554 --driver=kvm2 
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-147554 --driver=kvm2 : (47.962765775s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-149738 --driver=kvm2 
E0818 19:09:59.784828 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-149738 --driver=kvm2 : (49.988445226s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-147554
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-149738
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-149738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-149738
helpers_test.go:175: Cleaning up "first-147554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-147554
--- PASS: TestMinikubeProfile (100.59s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (31.05s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-592047 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-592047 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 : (30.052976339s)
--- PASS: TestMountStart/serial/StartWithMountFirst (31.05s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-592047 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-592047 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.38s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (30.83s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-607133 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-607133 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2 : (29.831201697s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607133 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607133 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.7s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-592047 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.70s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607133 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607133 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

                                                
                                    
TestMountStart/serial/Stop (2.27s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-607133
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-607133: (2.272422753s)
--- PASS: TestMountStart/serial/Stop (2.27s)

                                                
                                    
TestMountStart/serial/RestartStopped (26.32s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-607133
E0818 19:11:53.184995 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-607133: (25.317129828s)
--- PASS: TestMountStart/serial/RestartStopped (26.32s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607133 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-607133 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.37s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (129.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517037 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 
E0818 19:13:36.718578 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517037 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 : (2m9.127071042s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (129.53s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-517037 -- rollout status deployment/busybox: (2.34854535s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-qwfx6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-sqk2g -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-qwfx6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-sqk2g -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-qwfx6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-sqk2g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.92s)
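The DeployApp2Nodes checks above run nslookup for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local inside a busybox pod scheduled on each node. A minimal sketch of the same per-pod DNS probe driven from Go, using kubectl directly rather than the out/minikube-linux-amd64 kubectl wrapper the test goes through (pod names copied from this run; a real caller would discover them with the jsonpath query shown above):

	// Minimal sketch: repeat the in-pod DNS lookup for each busybox pod.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{"busybox-7dff88458-qwfx6", "busybox-7dff88458-sqk2g"}
		for _, pod := range pods {
			out, err := exec.Command("kubectl", "--context", "multinode-517037",
				"exec", pod, "--", "nslookup", "kubernetes.default").CombinedOutput()
			if err != nil {
				fmt.Printf("%s: lookup failed: %v\n%s", pod, err, out)
				continue
			}
			fmt.Printf("%s: cluster DNS resolved kubernetes.default\n", pod)
		}
	}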

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-qwfx6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-qwfx6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-sqk2g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-517037 -- exec busybox-7dff88458-sqk2g -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.81s)

                                                
                                    
TestMultiNode/serial/AddNode (59.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-517037 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-517037 -v 3 --alsologtostderr: (59.251734285s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (59.79s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-517037 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.11s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp testdata/cp-test.txt multinode-517037:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3036568732/001/cp-test_multinode-517037.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037:/home/docker/cp-test.txt multinode-517037-m02:/home/docker/cp-test_multinode-517037_multinode-517037-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m02 "sudo cat /home/docker/cp-test_multinode-517037_multinode-517037-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037:/home/docker/cp-test.txt multinode-517037-m03:/home/docker/cp-test_multinode-517037_multinode-517037-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m03 "sudo cat /home/docker/cp-test_multinode-517037_multinode-517037-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp testdata/cp-test.txt multinode-517037-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3036568732/001/cp-test_multinode-517037-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037-m02:/home/docker/cp-test.txt multinode-517037:/home/docker/cp-test_multinode-517037-m02_multinode-517037.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037 "sudo cat /home/docker/cp-test_multinode-517037-m02_multinode-517037.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037-m02:/home/docker/cp-test.txt multinode-517037-m03:/home/docker/cp-test_multinode-517037-m02_multinode-517037-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m03 "sudo cat /home/docker/cp-test_multinode-517037-m02_multinode-517037-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp testdata/cp-test.txt multinode-517037-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3036568732/001/cp-test_multinode-517037-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037-m03:/home/docker/cp-test.txt multinode-517037:/home/docker/cp-test_multinode-517037-m03_multinode-517037.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037 "sudo cat /home/docker/cp-test_multinode-517037-m03_multinode-517037.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 cp multinode-517037-m03:/home/docker/cp-test.txt multinode-517037-m02:/home/docker/cp-test_multinode-517037-m03_multinode-517037-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 ssh -n multinode-517037-m02 "sudo cat /home/docker/cp-test_multinode-517037-m03_multinode-517037-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.11s)

                                                
                                    
TestMultiNode/serial/StopNode (3.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-517037 node stop m03: (2.446146448s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517037 status: exit status 7 (408.635566ms)

                                                
                                                
-- stdout --
	multinode-517037
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-517037-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-517037-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr: exit status 7 (418.140189ms)

                                                
                                                
-- stdout --
	multinode-517037
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-517037-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-517037-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:15:27.441059 1177841 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:15:27.441179 1177841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:15:27.441187 1177841 out.go:358] Setting ErrFile to fd 2...
	I0818 19:15:27.441191 1177841 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:15:27.441382 1177841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 19:15:27.441544 1177841 out.go:352] Setting JSON to false
	I0818 19:15:27.441570 1177841 mustload.go:65] Loading cluster: multinode-517037
	I0818 19:15:27.441679 1177841 notify.go:220] Checking for updates...
	I0818 19:15:27.441938 1177841 config.go:182] Loaded profile config "multinode-517037": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 19:15:27.441953 1177841 status.go:255] checking status of multinode-517037 ...
	I0818 19:15:27.442297 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.442349 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.457699 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41909
	I0818 19:15:27.458099 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.458767 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.458814 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.459191 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.459368 1177841 main.go:141] libmachine: (multinode-517037) Calling .GetState
	I0818 19:15:27.461048 1177841 status.go:330] multinode-517037 host status = "Running" (err=<nil>)
	I0818 19:15:27.461065 1177841 host.go:66] Checking if "multinode-517037" exists ...
	I0818 19:15:27.461371 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.461406 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.476115 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44487
	I0818 19:15:27.476587 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.477043 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.477066 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.477428 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.477667 1177841 main.go:141] libmachine: (multinode-517037) Calling .GetIP
	I0818 19:15:27.480274 1177841 main.go:141] libmachine: (multinode-517037) DBG | domain multinode-517037 has defined MAC address 52:54:00:5c:81:35 in network mk-multinode-517037
	I0818 19:15:27.480742 1177841 main.go:141] libmachine: (multinode-517037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:35", ip: ""} in network mk-multinode-517037: {Iface:virbr1 ExpiryTime:2024-08-18 20:12:17 +0000 UTC Type:0 Mac:52:54:00:5c:81:35 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-517037 Clientid:01:52:54:00:5c:81:35}
	I0818 19:15:27.480775 1177841 main.go:141] libmachine: (multinode-517037) DBG | domain multinode-517037 has defined IP address 192.168.39.48 and MAC address 52:54:00:5c:81:35 in network mk-multinode-517037
	I0818 19:15:27.480891 1177841 host.go:66] Checking if "multinode-517037" exists ...
	I0818 19:15:27.481155 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.481198 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.495691 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0818 19:15:27.496105 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.496603 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.496622 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.496919 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.497092 1177841 main.go:141] libmachine: (multinode-517037) Calling .DriverName
	I0818 19:15:27.497309 1177841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:15:27.497333 1177841 main.go:141] libmachine: (multinode-517037) Calling .GetSSHHostname
	I0818 19:15:27.500042 1177841 main.go:141] libmachine: (multinode-517037) DBG | domain multinode-517037 has defined MAC address 52:54:00:5c:81:35 in network mk-multinode-517037
	I0818 19:15:27.500439 1177841 main.go:141] libmachine: (multinode-517037) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5c:81:35", ip: ""} in network mk-multinode-517037: {Iface:virbr1 ExpiryTime:2024-08-18 20:12:17 +0000 UTC Type:0 Mac:52:54:00:5c:81:35 Iaid: IPaddr:192.168.39.48 Prefix:24 Hostname:multinode-517037 Clientid:01:52:54:00:5c:81:35}
	I0818 19:15:27.500464 1177841 main.go:141] libmachine: (multinode-517037) DBG | domain multinode-517037 has defined IP address 192.168.39.48 and MAC address 52:54:00:5c:81:35 in network mk-multinode-517037
	I0818 19:15:27.500577 1177841 main.go:141] libmachine: (multinode-517037) Calling .GetSSHPort
	I0818 19:15:27.500771 1177841 main.go:141] libmachine: (multinode-517037) Calling .GetSSHKeyPath
	I0818 19:15:27.500912 1177841 main.go:141] libmachine: (multinode-517037) Calling .GetSSHUsername
	I0818 19:15:27.501057 1177841 sshutil.go:53] new ssh client: &{IP:192.168.39.48 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/multinode-517037/id_rsa Username:docker}
	I0818 19:15:27.580281 1177841 ssh_runner.go:195] Run: systemctl --version
	I0818 19:15:27.586283 1177841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:15:27.600468 1177841 kubeconfig.go:125] found "multinode-517037" server: "https://192.168.39.48:8443"
	I0818 19:15:27.600498 1177841 api_server.go:166] Checking apiserver status ...
	I0818 19:15:27.600528 1177841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0818 19:15:27.613541 1177841 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1833/cgroup
	W0818 19:15:27.627320 1177841 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1833/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0818 19:15:27.627368 1177841 ssh_runner.go:195] Run: ls
	I0818 19:15:27.631864 1177841 api_server.go:253] Checking apiserver healthz at https://192.168.39.48:8443/healthz ...
	I0818 19:15:27.636108 1177841 api_server.go:279] https://192.168.39.48:8443/healthz returned 200:
	ok
	I0818 19:15:27.636140 1177841 status.go:422] multinode-517037 apiserver status = Running (err=<nil>)
	I0818 19:15:27.636150 1177841 status.go:257] multinode-517037 status: &{Name:multinode-517037 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:15:27.636167 1177841 status.go:255] checking status of multinode-517037-m02 ...
	I0818 19:15:27.636508 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.636545 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.652112 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46577
	I0818 19:15:27.652517 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.652909 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.652935 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.653285 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.653532 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .GetState
	I0818 19:15:27.655053 1177841 status.go:330] multinode-517037-m02 host status = "Running" (err=<nil>)
	I0818 19:15:27.655070 1177841 host.go:66] Checking if "multinode-517037-m02" exists ...
	I0818 19:15:27.655372 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.655412 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.670657 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36251
	I0818 19:15:27.671084 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.671567 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.671594 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.671963 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.672176 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .GetIP
	I0818 19:15:27.674876 1177841 main.go:141] libmachine: (multinode-517037-m02) DBG | domain multinode-517037-m02 has defined MAC address 52:54:00:8c:75:4a in network mk-multinode-517037
	I0818 19:15:27.675354 1177841 main.go:141] libmachine: (multinode-517037-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:75:4a", ip: ""} in network mk-multinode-517037: {Iface:virbr1 ExpiryTime:2024-08-18 20:13:27 +0000 UTC Type:0 Mac:52:54:00:8c:75:4a Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-517037-m02 Clientid:01:52:54:00:8c:75:4a}
	I0818 19:15:27.675383 1177841 main.go:141] libmachine: (multinode-517037-m02) DBG | domain multinode-517037-m02 has defined IP address 192.168.39.129 and MAC address 52:54:00:8c:75:4a in network mk-multinode-517037
	I0818 19:15:27.675502 1177841 host.go:66] Checking if "multinode-517037-m02" exists ...
	I0818 19:15:27.675825 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.675872 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.690844 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37617
	I0818 19:15:27.691237 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.691691 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.691711 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.691970 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.692160 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .DriverName
	I0818 19:15:27.692330 1177841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0818 19:15:27.692350 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .GetSSHHostname
	I0818 19:15:27.695156 1177841 main.go:141] libmachine: (multinode-517037-m02) DBG | domain multinode-517037-m02 has defined MAC address 52:54:00:8c:75:4a in network mk-multinode-517037
	I0818 19:15:27.695577 1177841 main.go:141] libmachine: (multinode-517037-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:75:4a", ip: ""} in network mk-multinode-517037: {Iface:virbr1 ExpiryTime:2024-08-18 20:13:27 +0000 UTC Type:0 Mac:52:54:00:8c:75:4a Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:multinode-517037-m02 Clientid:01:52:54:00:8c:75:4a}
	I0818 19:15:27.695606 1177841 main.go:141] libmachine: (multinode-517037-m02) DBG | domain multinode-517037-m02 has defined IP address 192.168.39.129 and MAC address 52:54:00:8c:75:4a in network mk-multinode-517037
	I0818 19:15:27.695814 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .GetSSHPort
	I0818 19:15:27.695978 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .GetSSHKeyPath
	I0818 19:15:27.696141 1177841 main.go:141] libmachine: (multinode-517037-m02) Calling .GetSSHUsername
	I0818 19:15:27.696320 1177841 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/19423-1145725/.minikube/machines/multinode-517037-m02/id_rsa Username:docker}
	I0818 19:15:27.780548 1177841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0818 19:15:27.794057 1177841 status.go:257] multinode-517037-m02 status: &{Name:multinode-517037-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:15:27.794095 1177841 status.go:255] checking status of multinode-517037-m03 ...
	I0818 19:15:27.794541 1177841 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:15:27.794605 1177841 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:15:27.809905 1177841 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35733
	I0818 19:15:27.810297 1177841 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:15:27.810796 1177841 main.go:141] libmachine: Using API Version  1
	I0818 19:15:27.810824 1177841 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:15:27.811198 1177841 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:15:27.811421 1177841 main.go:141] libmachine: (multinode-517037-m03) Calling .GetState
	I0818 19:15:27.813089 1177841 status.go:330] multinode-517037-m03 host status = "Stopped" (err=<nil>)
	I0818 19:15:27.813108 1177841 status.go:343] host is not running, skipping remaining checks
	I0818 19:15:27.813116 1177841 status.go:257] multinode-517037-m03 status: &{Name:multinode-517037-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.27s)
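
The status sequence in the stderr above ends with a plain HTTPS probe of the apiserver's /healthz endpoint (api_server.go:253/279). As a rough, self-contained Go sketch of that kind of probe: the address is taken from the log, the timeout is arbitrary, and certificate verification is skipped for brevity (a real client would verify against the cluster CA).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Endpoint copied from the log above; adjust for your own cluster.
		url := "https://192.168.39.48:8443/healthz"

		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification keeps the sketch short; do not do this in real code.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}

		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()

		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("apiserver healthy: %s\n", body) // healthz answers "ok"
		} else {
			fmt.Printf("apiserver returned HTTP %d\n", resp.StatusCode)
		}
	}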

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (42.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-517037 node start m03 -v=7 --alsologtostderr: (41.628715672s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (42.23s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (190.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-517037
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-517037
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-517037: (28.15180036s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517037 --wait=true -v=8 --alsologtostderr
E0818 19:16:53.185665 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:18:36.718122 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517037 --wait=true -v=8 --alsologtostderr: (2m42.741730238s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-517037
--- PASS: TestMultiNode/serial/RestartKeepsNodes (190.99s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-517037 node delete m03: (1.558726572s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.08s)
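
The final step of DeleteNode re-checks node readiness with a kubectl go-template (multinode_test.go:444). Below is an illustrative, stand-alone Go equivalent that shells out to kubectl and scans the output; the template string is copied from the test, the rest is an assumption about how one might consume it.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// For every node, print the status of its "Ready" condition, one per line.
		tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

		out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
		if err != nil {
			fmt.Println("kubectl failed:", err)
			return
		}

		for _, status := range strings.Fields(string(out)) {
			if status != "True" {
				fmt.Println("found a node that is not Ready:", status)
				return
			}
		}
		fmt.Println("all remaining nodes report Ready=True")
	}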

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (25.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-517037 stop: (25.060202071s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517037 status: exit status 7 (83.185841ms)

                                                
                                                
-- stdout --
	multinode-517037
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-517037-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr: exit status 7 (81.705752ms)

                                                
                                                
-- stdout --
	multinode-517037
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-517037-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0818 19:19:48.305623 1180106 out.go:345] Setting OutFile to fd 1 ...
	I0818 19:19:48.305890 1180106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:19:48.305902 1180106 out.go:358] Setting ErrFile to fd 2...
	I0818 19:19:48.305908 1180106 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0818 19:19:48.306100 1180106 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19423-1145725/.minikube/bin
	I0818 19:19:48.306296 1180106 out.go:352] Setting JSON to false
	I0818 19:19:48.306331 1180106 mustload.go:65] Loading cluster: multinode-517037
	I0818 19:19:48.306816 1180106 notify.go:220] Checking for updates...
	I0818 19:19:48.307761 1180106 config.go:182] Loaded profile config "multinode-517037": Driver=kvm2, ContainerRuntime=docker, KubernetesVersion=v1.31.0
	I0818 19:19:48.307790 1180106 status.go:255] checking status of multinode-517037 ...
	I0818 19:19:48.308284 1180106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:19:48.308320 1180106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:19:48.323267 1180106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46297
	I0818 19:19:48.323742 1180106 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:19:48.324359 1180106 main.go:141] libmachine: Using API Version  1
	I0818 19:19:48.324377 1180106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:19:48.324699 1180106 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:19:48.324880 1180106 main.go:141] libmachine: (multinode-517037) Calling .GetState
	I0818 19:19:48.326256 1180106 status.go:330] multinode-517037 host status = "Stopped" (err=<nil>)
	I0818 19:19:48.326277 1180106 status.go:343] host is not running, skipping remaining checks
	I0818 19:19:48.326283 1180106 status.go:257] multinode-517037 status: &{Name:multinode-517037 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0818 19:19:48.326338 1180106 status.go:255] checking status of multinode-517037-m02 ...
	I0818 19:19:48.326644 1180106 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_integration/out/docker-machine-driver-kvm2
	I0818 19:19:48.326682 1180106 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0818 19:19:48.341534 1180106 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0818 19:19:48.341979 1180106 main.go:141] libmachine: () Calling .GetVersion
	I0818 19:19:48.342484 1180106 main.go:141] libmachine: Using API Version  1
	I0818 19:19:48.342505 1180106 main.go:141] libmachine: () Calling .SetConfigRaw
	I0818 19:19:48.342896 1180106 main.go:141] libmachine: () Calling .GetMachineName
	I0818 19:19:48.343100 1180106 main.go:141] libmachine: (multinode-517037-m02) Calling .GetState
	I0818 19:19:48.344753 1180106 status.go:330] multinode-517037-m02 host status = "Stopped" (err=<nil>)
	I0818 19:19:48.344769 1180106 status.go:343] host is not running, skipping remaining checks
	I0818 19:19:48.344775 1180106 status.go:257] multinode-517037-m02 status: &{Name:multinode-517037-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (25.23s)
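
StopMultiNode leans on the exit code of minikube status rather than its text: a fully stopped profile exits with status 7, as both runs above show. Here is a small illustrative Go wrapper around that convention (binary path and profile name copied from this run; the meaning of specific codes is minikube-internal and not modelled here).

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-517037", "status")
		out, err := cmd.Output()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Print(string(out)) // everything running
		case errors.As(err, &exitErr):
			fmt.Printf("status exited with code %d (some components stopped or errored)\n", exitErr.ExitCode())
		default:
			fmt.Println("could not run minikube:", err)
		}
	}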

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (119.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517037 --wait=true -v=8 --alsologtostderr --driver=kvm2 
E0818 19:19:56.248422 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517037 --wait=true -v=8 --alsologtostderr --driver=kvm2 : (1m59.332329853s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-517037 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (119.88s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (62.55s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-517037
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517037-m02 --driver=kvm2 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-517037-m02 --driver=kvm2 : exit status 14 (59.903521ms)

                                                
                                                
-- stdout --
	* [multinode-517037-m02] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-517037-m02' is duplicated with machine name 'multinode-517037-m02' in profile 'multinode-517037'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-517037-m03 --driver=kvm2 
E0818 19:21:53.182931 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-517037-m03 --driver=kvm2 : (51.380479144s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-517037
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-517037: exit status 80 (227.907381ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-517037 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-517037-m03 already exists in multinode-517037-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_6.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-517037-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-517037-m03: (10.83905776s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (62.55s)
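
ValidateNameConflict exercises two rules: a new profile may not reuse a machine name belonging to an existing profile (the MK_USAGE exit 14 above), and node add refuses a node whose name already exists as a separate profile (the GUEST_NODE_ADD exit 80). Below is a toy Go sketch of the first rule only, using the names from this run; it illustrates the uniqueness check and is not minikube's validation code.

	package main

	import "fmt"

	// taken reports whether a requested profile name collides with an existing
	// profile or with one of its machine names ("<profile>-m02", ...).
	func taken(requested string, profiles map[string][]string) bool {
		for profile, machines := range profiles {
			if requested == profile {
				return true
			}
			for _, machine := range machines {
				if requested == machine {
					return true
				}
			}
		}
		return false
	}

	func main() {
		existing := map[string][]string{
			"multinode-517037": {"multinode-517037", "multinode-517037-m02"},
		}
		fmt.Println(taken("multinode-517037-m02", existing)) // true  -> rejected, as above
		fmt.Println(taken("multinode-517037-m03", existing)) // false -> allowed to start
	}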

                                                
                                    
x
+
TestPreload (304.48s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-089912 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4
E0818 19:23:36.718690 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-089912 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.24.4: (2m7.133242272s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-089912 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-089912 image pull gcr.io/k8s-minikube/busybox: (1.244876544s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-089912
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-089912: (12.604532524s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-089912 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 
E0818 19:26:39.787865 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:26:53.185859 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-089912 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 : (2m42.228512377s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-089912 image list
helpers_test.go:175: Cleaning up "test-preload-089912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-089912
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-089912: (1.064857056s)
--- PASS: TestPreload (304.48s)

                                                
                                    
x
+
TestScheduledStopUnix (122.71s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-974414 --memory=2048 --driver=kvm2 
E0818 19:28:36.718128 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-974414 --memory=2048 --driver=kvm2 : (51.073153004s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-974414 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-974414 -n scheduled-stop-974414
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-974414 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-974414 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-974414 -n scheduled-stop-974414
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-974414
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-974414 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-974414
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-974414: exit status 7 (66.850927ms)

                                                
                                                
-- stdout --
	scheduled-stop-974414
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-974414 -n scheduled-stop-974414
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-974414 -n scheduled-stop-974414: exit status 7 (65.72372ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-974414" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-974414
--- PASS: TestScheduledStopUnix (122.71s)
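
TestScheduledStopUnix drives the --schedule / --cancel-scheduled flow: a stop is queued for later, can be cancelled, and eventually fires. Conceptually that is a cancellable timer; the sketch below shows the idea with time.AfterFunc and is only an analogy, since the real feature has to outlive the CLI process that scheduled it.

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Queue a "stop" 15 seconds out, mirroring `stop --schedule 15s`.
		stop := time.AfterFunc(15*time.Second, func() {
			fmt.Println("stopping cluster now")
		})

		// Simulate `--cancel-scheduled` arriving before the timer fires.
		time.Sleep(2 * time.Second)
		if stop.Stop() {
			fmt.Println("scheduled stop cancelled")
		}
	}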

                                                
                                    
x
+
TestSkaffold (129.21s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe570376211 version
skaffold_test.go:63: skaffold version: v2.13.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-532054 --memory=2600 --driver=kvm2 
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-532054 --memory=2600 --driver=kvm2 : (48.900639836s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/KVM_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe570376211 run --minikube-profile skaffold-532054 --kube-context skaffold-532054 --status-check=true --port-forward=false --interactive=false
E0818 19:31:53.185111 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe570376211 run --minikube-profile skaffold-532054 --kube-context skaffold-532054 --status-check=true --port-forward=false --interactive=false: (1m7.231963398s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-56ffb49f9-fmn97" [c93a8345-e11d-41ef-8b95-200b6196c165] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004257841s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5bdd79dccb-x6fk9" [f18de748-7121-4be7-9020-aaa049750625] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004349907s
helpers_test.go:175: Cleaning up "skaffold-532054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-532054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-532054: (1.23775225s)
--- PASS: TestSkaffold (129.21s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (238.9s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.222590468 start -p running-upgrade-774498 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.222590468 start -p running-upgrade-774498 --memory=2200 --vm-driver=kvm2 : (2m10.386495344s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-774498 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-774498 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m46.996469526s)
helpers_test.go:175: Cleaning up "running-upgrade-774498" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-774498
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-774498: (1.148525939s)
--- PASS: TestRunningBinaryUpgrade (238.90s)

                                                
                                    
x
+
TestKubernetesUpgrade (203.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2 : (1m25.951351281s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-636516
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-636516: (4.826542208s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-636516 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-636516 status --format={{.Host}}: exit status 7 (87.332139ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (46.70367397s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-636516 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 : exit status 106 (83.115574ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-636516] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-636516
	    minikube start -p kubernetes-upgrade-636516 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6365162 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-636516 --kubernetes-version=v1.31.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 
E0818 19:36:53.183956 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.527559 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.534034 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.545375 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.566711 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.608066 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.689537 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:56.851481 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:57.173286 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:57.815306 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:36:59.097023 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:01.658922 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:06.781166 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:37:17.023247 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-636516 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=kvm2 : (54.909642157s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-636516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-636516
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-636516: (10.994317159s)
--- PASS: TestKubernetesUpgrade (203.61s)
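
The downgrade attempt above is refused up front (K8S_DOWNGRADE_UNSUPPORTED) by comparing the requested Kubernetes version against the one already provisioned. A minimal Go sketch of such a guard follows, assuming plain "vMAJOR.MINOR.PATCH" strings; it is illustrative only and not the comparison minikube actually uses.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// parse splits "v1.31.0" into its numeric components.
	func parse(v string) [3]int {
		var out [3]int
		for i, part := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
			out[i], _ = strconv.Atoi(part)
		}
		return out
	}

	// older reports whether version a is older than version b.
	func older(a, b string) bool {
		pa, pb := parse(a), parse(b)
		for i := 0; i < 3; i++ {
			if pa[i] != pb[i] {
				return pa[i] < pb[i]
			}
		}
		return false
	}

	func main() {
		existing, requested := "v1.31.0", "v1.20.0"
		if older(requested, existing) {
			fmt.Printf("refusing to downgrade existing %s cluster to %s\n", existing, requested)
		}
	}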

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (135.41s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.632710760 start -p stopped-upgrade-573242 --memory=2200 --vm-driver=kvm2 
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.632710760 start -p stopped-upgrade-573242 --memory=2200 --vm-driver=kvm2 : (1m7.886057765s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.632710760 -p stopped-upgrade-573242 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.632710760 -p stopped-upgrade-573242 stop: (4.378043567s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-573242 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 
E0818 19:36:36.250527 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-573242 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 : (1m3.149223363s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (135.41s)

                                                
                                    
x
+
TestPause/serial/Start (70.62s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-167099 --memory=2048 --install-addons=false --wait=all --driver=kvm2 
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-167099 --memory=2048 --install-addons=false --wait=all --driver=kvm2 : (1m10.624371159s)
--- PASS: TestPause/serial/Start (70.62s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (63.36s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-167099 --alsologtostderr -v=1 --driver=kvm2 
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-167099 --alsologtostderr -v=1 --driver=kvm2 : (1m3.331341649s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (63.36s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-573242
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-573242: (1.138948902s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2 : exit status 14 (60.815203ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-932889] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19423
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19423-1145725/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19423-1145725/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)
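
StartNoK8sWithVersion only verifies argument validation: --kubernetes-version contradicts --no-kubernetes, so minikube bails out with a usage error (exit status 14) before doing any work. Here is an illustrative Go version of that mutually-exclusive-flags check with the standard flag package (minikube's real CLI is structured differently).

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noK8s := flag.Bool("no-kubernetes", false, "start the VM without Kubernetes")
		k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		if *noK8s && *k8sVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // usage error, matching the exit status seen above
		}
		fmt.Println("flags are consistent")
	}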

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (78.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932889 --driver=kvm2 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-932889 --driver=kvm2 : (1m18.678465777s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-932889 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (78.96s)

                                                
                                    
x
+
TestPause/serial/Pause (0.7s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-167099 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.70s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-167099 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-167099 --output=json --layout=cluster: exit status 2 (271.57001ms)

                                                
                                                
-- stdout --
	{"Name":"pause-167099","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-167099","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.27s)
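
VerifyStatus tolerates the non-zero exit (status 2 here) and reads the cluster-layout JSON shown in the stdout above, where a paused cluster reports StatusCode 418. Below is a partial Go decoder for that document, keeping only fields visible in the sample; anything omitted is simply ignored by encoding/json, so this is a safe but deliberately incomplete sketch.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			Components map[string]struct {
				StatusName string `json:"StatusName"`
			} `json:"Components"`
		} `json:"Nodes"`
	}

	func main() {
		// Trimmed copy of the sample emitted by `status --output=json --layout=cluster`.
		raw := []byte(`{"Name":"pause-167099","StatusName":"Paused","Nodes":[{"Name":"pause-167099","Components":{"apiserver":{"StatusName":"Paused"},"kubelet":{"StatusName":"Stopped"}}}]}`)

		var st clusterStatus
		if err := json.Unmarshal(raw, &st); err != nil {
			panic(err)
		}
		fmt.Printf("cluster %s is %s\n", st.Name, st.StatusName)
		for _, node := range st.Nodes {
			for name, comp := range node.Components {
				fmt.Printf("  %s/%s: %s\n", node.Name, name, comp.StatusName)
			}
		}
	}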

                                                
                                    
x
+
TestPause/serial/Unpause (0.61s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-167099 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.66s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-167099 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.66s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.06s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-167099 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-167099 --alsologtostderr -v=5: (1.055212562s)
--- PASS: TestPause/serial/DeletePaused (1.06s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.2s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (15.201957354s)
--- PASS: TestPause/serial/VerifyDeletedResources (15.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (62.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 
E0818 19:38:36.717770 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2 : (1m2.222898603s)
--- PASS: TestNetworkPlugins/group/auto/Start (62.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (101.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2 : (1m41.670284728s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (101.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (45.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --driver=kvm2 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-932889 --no-kubernetes --driver=kvm2 : (44.112249856s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-932889 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-932889 status -o json: exit status 2 (225.831495ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-932889","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-932889
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-932889: (1.040656494s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (45.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (109.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2 : (1m49.836867799s)
--- PASS: TestNetworkPlugins/group/calico/Start (109.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4jc8q" [9d8f01b0-1ea9-4dcb-8d54-722fdf18caec] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0818 19:39:40.389377 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-4jc8q" [9d8f01b0-1ea9-4dcb-8d54-722fdf18caec] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004345327s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.88s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (26.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-911019 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context auto-911019 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.179292871s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context auto-911019 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context auto-911019 exec deployment/netcat -- nslookup kubernetes.default: (10.180976424s)
--- PASS: TestNetworkPlugins/group/auto/DNS (26.66s)
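
The auto/DNS probe times out once ("connection timed out; no servers could be reached") and succeeds when the test retries about ten seconds later. A bare-bones Go retry loop of the same shape is sketched below; it is illustrative only, and kubernetes.default resolves solely from inside the cluster, so the lookup is expected to fail when run on a host.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const host = "kubernetes.default"

		for attempt := 1; attempt <= 3; attempt++ {
			addrs, err := net.LookupHost(host)
			if err == nil {
				fmt.Println("resolved:", addrs)
				return
			}
			fmt.Printf("attempt %d failed: %v; retrying\n", attempt, err)
			time.Sleep(time.Duration(attempt) * 5 * time.Second)
		}
		fmt.Println("giving up after 3 attempts")
	}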

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rhl4w" [f4e2496c-d8f7-4cd7-9809-27528d80eb1f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005932718s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mx9bm" [9e7f9cc8-c8d6-410e-996a-191aa034cc20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mx9bm" [9e7f9cc8-c8d6-410e-996a-191aa034cc20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004449038s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (71.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2 : (1m11.682895048s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (71.68s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)
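Note: the DNS, Localhost and HairPin checks above all run inside the netcat test deployment created by NetCatPod; they can be repeated by hand with the same commands recorded in the log (the last one dials the service name netcat, i.e. traffic that hairpins back to the pod itself):

    kubectl --context kindnet-911019 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context kindnet-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context kindnet-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"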

                                                
                                    
TestNetworkPlugins/group/false/Start (100.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=kvm2 : (1m40.880627934s)
--- PASS: TestNetworkPlugins/group/false/Start (100.88s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gkcnm" [fc600f15-ba0a-4314-8f9b-26f745436aa6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005531129s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l4qf2" [21ef92f5-2ce4-4304-b572-f4a7421ce7f3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l4qf2" [21ef92f5-2ce4-4304-b572-f4a7421ce7f3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.006032878s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.24s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-932889 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-932889 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.734991ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
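Note: this subtest passes because the command fails: in a NoKubernetes profile the kubelet unit is expected to be inactive, so the non-zero exit (systemd reporting the unit not active, surfaced over ssh as "Process exited with status 3") is the desired outcome. To check by hand with the same command:

    out/minikube-linux-amd64 ssh -p NoKubernetes-932889 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero is the expected result here: kubelet must not be running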

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.18s)

                                                
                                    
TestNoKubernetes/serial/Stop (59.85s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-932889
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-932889: (59.853019621s)
--- PASS: TestNoKubernetes/serial/Stop (59.85s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2 : (1m40.551072896s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.55s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cwpzc" [78c883c9-d84e-4677-b922-a2be0b5e40fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cwpzc" [78c883c9-d84e-4677-b922-a2be0b5e40fa] Running
E0818 19:41:53.183235 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004640238s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0818 19:41:56.527791 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (74.78s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 
E0818 19:42:24.230997 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2 : (1m14.782643253s)
--- PASS: TestNetworkPlugins/group/flannel/Start (74.78s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z94bf" [e81a12db-5d2f-4a74-9662-902a3c1ef982] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z94bf" [e81a12db-5d2f-4a74-9662-902a3c1ef982] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.00505091s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (103.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2 : (1m43.064504212s)
--- PASS: TestNetworkPlugins/group/bridge/Start (103.06s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (108.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-911019 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2 : (1m48.640753046s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (108.64s)
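Note: the Start subtests in this group differ only in how the pod network is selected. The three flag forms exercised in this run, taken from the commands above (--alsologtostderr trimmed for brevity), are a custom CNI manifest, a built-in CNI name, and the legacy kubelet network plugin:

    out/minikube-linux-amd64 start -p custom-flannel-911019 --memory=3072 --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2
    out/minikube-linux-amd64 start -p bridge-911019 --memory=3072 --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2
    out/minikube-linux-amd64 start -p kubenet-911019 --memory=3072 --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=kvm2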

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8btpl" [bb69f93a-a215-4095-927f-f6ae05d3261e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8btpl" [bb69f93a-a215-4095-927f-f6ae05d3261e] Running
E0818 19:43:19.789934 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005146343s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-kn5lf" [2f0e8f91-8f39-425b-af12-75e896435469] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004504492s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-brxhr" [b3dfe517-9a68-49a2-aec3-d5db68ae605a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0818 19:43:36.717841 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-brxhr" [b3dfe517-9a68-49a2-aec3-d5db68ae605a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.004754344s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (133.96s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-563513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-563513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (2m13.955129973s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (133.96s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (118.96s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-360394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-360394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (1m58.957043994s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (118.96s)
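Note: --preload=false disables minikube's preloaded image tarball, so this first start pulls the Kubernetes component images individually, which is consistent with the roughly two-minute duration above (that reading of the flag is an inference, not something the log states). The command as run:

    out/minikube-linux-amd64 start -p no-preload-360394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --kubernetes-version=v1.31.0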

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hgkqp" [07055ab3-e9f8-4f05-bfd8-a556ffd54cfb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hgkqp" [07055ab3-e9f8-4f05-bfd8-a556ffd54cfb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.007609798s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-911019 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-911019 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-tsnhn" [5ed783d3-89eb-446d-82b0-f4e188beae74] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-tsnhn" [5ed783d3-89eb-446d-82b0-f4e188beae74] Running
E0818 19:45:00.252734 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/auto-911019/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004954722s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.29s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (70.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-556545 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-556545 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (1m10.333455378s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (70.33s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-911019 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-911019 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-064010 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0818 19:45:22.006946 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.013357 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.024762 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.046273 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.087735 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.169240 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.331565 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:22.653579 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:23.294938 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:24.576311 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:27.138133 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:32.260092 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:42.502449 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-064010 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (1m12.272431276s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.27s)
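Note: this profile only moves the API server off the default port (8444 instead of 8443). A simple hand-run follow-up, not part of the test itself, is to confirm the advertised endpoint:

    out/minikube-linux-amd64 start -p default-k8s-diff-port-064010 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2 --kubernetes-version=v1.31.0
    kubectl --context default-k8s-diff-port-064010 cluster-info    # the control-plane URL should end in :8444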

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-563513 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6501f576-a5bc-4e60-8489-320756f94dd4] Pending
helpers_test.go:344: "busybox" [6501f576-a5bc-4e60-8489-320756f94dd4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0818 19:45:56.906546 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:56.912988 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:56.924527 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:56.946029 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:56.987483 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:57.069674 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:57.231448 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:57.552856 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:45:58.195188 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [6501f576-a5bc-4e60-8489-320756f94dd4] Running
E0818 19:45:59.476674 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:01.696109 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/auto-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:02.038414 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:02.984841 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006917741s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-563513 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.59s)
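Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it with the test's own polling helper, and then verifies exec works by reading the container's open-file limit. A hand-run equivalent (the kubectl wait line is a stand-in for the helper, not what the harness calls):

    kubectl --context old-k8s-version-563513 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-563513 wait --for=condition=Ready pod/busybox --timeout=8m
    kubectl --context old-k8s-version-563513 exec busybox -- /bin/sh -c "ulimit -n"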

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-360394 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [237da081-6edb-4bc8-92ac-afed1734f053] Pending
helpers_test.go:344: "busybox" [237da081-6edb-4bc8-92ac-afed1734f053] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [237da081-6edb-4bc8-92ac-afed1734f053] Running
E0818 19:46:07.159913 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005484936s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-360394 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-563513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-563513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.043973911s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-563513 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)
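Note: the metrics-server addon is enabled here with image and registry overrides (echoserver from a fake.domain registry), and the follow-up kubectl describe only confirms the Deployment object exists; whether the overridden image is actually pullable is not part of this check (that reading of the overrides is an inference from the flags). The two commands as run:

    out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-563513 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-563513 describe deploy/metrics-server -n kube-system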

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-556545 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [154e13a8-cd47-475b-beac-e48e72867a3d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [154e13a8-cd47-475b-beac-e48e72867a3d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00472864s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-556545 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.40s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-563513 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-563513 --alsologtostderr -v=3: (13.344115367s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-360394 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-360394 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (13.37s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-360394 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-360394 --alsologtostderr -v=3: (13.365630239s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (13.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-556545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-556545 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.040490199s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-556545 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (13.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-556545 --alsologtostderr -v=3
E0818 19:46:17.402197 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-556545 --alsologtostderr -v=3: (13.381614039s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (13.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563513 -n old-k8s-version-563513
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563513 -n old-k8s-version-563513: exit status 7 (75.847091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-563513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
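Note: exit status 7 with "Stopped" in stdout is the stopped-host case, which the test explicitly tolerates ("status error: exit status 7 (may be ok)"); the dashboard addon is then enabled against the stopped profile, presumably so it takes effect on the following SecondStart. As run above:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563513 -n old-k8s-version-563513
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-563513 --images=MetricsScraper=registry.k8s.io/echoserver:1.4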

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (404.64s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-563513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-563513 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --kubernetes-version=v1.20.0: (6m44.394063536s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-563513 -n old-k8s-version-563513
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (404.64s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-360394 -n no-preload-360394
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-360394 -n no-preload-360394: exit status 7 (89.426927ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-360394 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (314.47s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-360394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-360394 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --kubernetes-version=v1.31.0: (5m14.195043605s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-360394 -n no-preload-360394
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (314.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-556545 -n embed-certs-556545
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-556545 -n embed-certs-556545: exit status 7 (65.699406ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-556545 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (334.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-556545 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-556545 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --kubernetes-version=v1.31.0: (5m33.786047313s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-556545 -n embed-certs-556545
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (334.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-064010 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [abc859fd-b28e-4547-9e14-ac0d135a31df] Pending
helpers_test.go:344: "busybox" [abc859fd-b28e-4547-9e14-ac0d135a31df] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [abc859fd-b28e-4547-9e14-ac0d135a31df] Running
E0818 19:46:37.884400 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.005303066s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-064010 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.31s)
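The DeployApp step above creates the busybox pod from testdata/busybox.yaml, waits up to 8m0s for a pod labeled integration-test=busybox to become healthy, then runs ulimit -n inside it. A rough manual equivalent with kubectl, assuming the same context and the testdata/busybox.yaml manifest shipped with the minikube integration tests:

  kubectl --context default-k8s-diff-port-064010 create -f testdata/busybox.yaml
  # Wait for the pod the test watches (label and timeout taken from the log above).
  kubectl --context default-k8s-diff-port-064010 wait --for=condition=Ready pod \
      -l integration-test=busybox --timeout=8m
  # Same in-pod check the test performs.
  kubectl --context default-k8s-diff-port-064010 exec busybox -- /bin/sh -c "ulimit -n"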

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-064010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-064010 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)
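The describe call above is how the harness confirms the metrics-server addon picked up the --images and --registries overrides. A hedged way to spot-check the same thing directly; the jsonpath query is illustrative, and the exact composed image string depends on how minikube joins the registry and image overrides, but it should reference fake.domain and echoserver:1.4:

  kubectl --context default-k8s-diff-port-064010 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'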

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-064010 --alsologtostderr -v=3
E0818 19:46:43.947268 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.214149 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.221112 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.232538 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.254049 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.295557 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.377148 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.539232 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:46.861348 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:47.503767 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:48.785572 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:51.347986 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:53.183221 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:56.469831 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:46:56.527492 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/skaffold-532054/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-064010 --alsologtostderr -v=3: (13.378560347s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (13.38s)
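The interleaved cert_rotation.go:171 "Unhandled Error" lines here (and throughout the later tests) most likely come from the shared test process still holding kubeconfig entries whose client certificates belonged to profiles deleted earlier in the run (custom-flannel-911019, kindnet-911019, and so on); the referenced files no longer exist, so the certificate reload fails, but none of these errors affect the test results. One way to confirm they are stale references, assuming shell access to the Jenkins workspace paths printed in the errors:

  # List the profiles that still exist under the integration .minikube directory...
  ls /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/
  # ...and check one of the paths named in the errors; it should be absent.
  ls -l /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt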

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010: exit status 7 (63.66328ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-064010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-064010 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0
E0818 19:47:06.711807 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:18.846602 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:23.618495 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/auto-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:27.193448 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.458367 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.464905 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.476391 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.498038 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.539549 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.621073 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:37.782655 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:38.104580 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:38.746288 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:40.028617 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:42.590500 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:47.712165 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:47:57.954083 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:05.869463 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:08.155233 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:12.740794 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:12.747195 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:12.758580 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:12.780010 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:12.821465 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:12.903056 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:13.064675 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:13.386642 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:14.028553 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:15.310902 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:17.872781 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:18.436509 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:22.994268 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.387701 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.394095 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.405543 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.426928 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.468402 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.549885 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:28.711445 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:29.032885 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:29.675113 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:30.957460 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:33.236025 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:33.518868 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:36.718725 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/functional-771033/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:38.641340 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:40.768803 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:48.882980 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:53.717699 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:48:59.398436 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:09.364321 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.393450 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.399890 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.411275 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.432713 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.474143 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.555779 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:26.717371 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:27.039304 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:27.681071 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:28.962491 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:30.076582 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:31.523789 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:34.679568 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:36.645369 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:39.758398 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/auto-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:46.887346 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:50.326127 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.380660 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.387092 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.398557 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.419999 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.461520 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.543169 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:53.705524 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:54.027224 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:54.668649 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:55.950753 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:49:58.512704 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:03.634746 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:07.368734 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:07.460405 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/auto-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:13.876182 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:21.320185 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:22.007302 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:34.358498 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:48.330959 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:49.711661 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kindnet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:56.601310 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:50:56.906957 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:51:12.248494 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:51:15.319891 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:51:24.610425 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/calico-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-064010 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --kubernetes-version=v1.31.0: (5m29.122101204s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (329.37s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-75czt" [034d1cd5-4ed0-4017-8170-034aaef4c1e7] Running
E0818 19:51:46.214792 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004611716s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-75czt" [034d1cd5-4ed0-4017-8170-034aaef4c1e7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00474487s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-360394 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-360394 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-360394 --alsologtostderr -v=1
E0818 19:51:53.183614 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-360394 -n no-preload-360394
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-360394 -n no-preload-360394: exit status 2 (252.845758ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-360394 -n no-preload-360394
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-360394 -n no-preload-360394: exit status 2 (244.060602ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-360394 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-360394 -n no-preload-360394
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-360394 -n no-preload-360394
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.49s)
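The Pause subtests in this group all follow the same pattern: pause the profile, confirm via minikube status that the API server reports Paused and the kubelet reports Stopped (both queries come back as exit status 2, which the harness tolerates), then unpause and re-check. A compact shell sketch of that sequence, using the no-preload profile from this run:

  out/minikube-linux-amd64 pause -p no-preload-360394 --alsologtostderr -v=1
  # While paused, both status queries exit 2; the stdout values are what the test inspects.
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-360394 -n no-preload-360394   # Paused
  out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p no-preload-360394 -n no-preload-360394   # Stopped
  out/minikube-linux-amd64 unpause -p no-preload-360394 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p no-preload-360394 -n no-preload-360394
  out/minikube-linux-amd64 status --format='{{.Kubelet}}'   -p no-preload-360394 -n no-preload-360394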

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (55.9s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-104543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-104543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (55.894852248s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (55.90s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kn29j" [4fe85348-cf3a-4746-b02c-48596ce4e5af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005123361s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kn29j" [4fe85348-cf3a-4746-b02c-48596ce4e5af] Running
E0818 19:52:10.252641 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/bridge-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005378294s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-556545 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-556545 image list --format=json
E0818 19:52:13.918692 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/custom-flannel-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-556545 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-556545 -n embed-certs-556545
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-556545 -n embed-certs-556545: exit status 2 (256.431491ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-556545 -n embed-certs-556545
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-556545 -n embed-certs-556545: exit status 2 (249.542134ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-556545 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-556545 -n embed-certs-556545
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-556545 -n embed-certs-556545
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.60s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m4grl" [be0b4dfd-148f-4306-936b-542e4dcb955f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004766428s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-m4grl" [be0b4dfd-148f-4306-936b-542e4dcb955f] Running
E0818 19:52:37.241706 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/kubenet-911019/client.crt: no such file or directory" logger="UnhandledError"
E0818 19:52:37.458157 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004850114s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-064010 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-064010 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-064010 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010: exit status 2 (235.614771ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010: exit status 2 (244.784427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-064010 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-064010 -n default-k8s-diff-port-064010
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.54s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-104543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-104543 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080389573s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (13.32s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-104543 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-104543 --alsologtostderr -v=3: (13.322702614s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (13.32s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vltlb" [5cf2c0f0-74fe-4eeb-8ac6-b65e8f73aa2b] Running
E0818 19:53:05.162127 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/false-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004497006s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104543 -n newest-cni-104543
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104543 -n newest-cni-104543: exit status 7 (62.914214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-104543 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (36.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-104543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-104543 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --kubernetes-version=v1.31.0: (36.493548573s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-104543 -n newest-cni-104543
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (36.74s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-vltlb" [5cf2c0f0-74fe-4eeb-8ac6-b65e8f73aa2b] Running
E0818 19:53:12.740842 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/enable-default-cni-911019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00536265s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-563513 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-563513 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-563513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-563513 -n old-k8s-version-563513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-563513 -n old-k8s-version-563513: exit status 2 (243.545029ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-563513 -n old-k8s-version-563513
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-563513 -n old-k8s-version-563513: exit status 2 (240.28185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-563513 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-563513 -n old-k8s-version-563513
E0818 19:53:16.251915 1152900 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19423-1145725/.minikube/profiles/addons-058019/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-563513 -n old-k8s-version-563513
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.45s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-104543 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-104543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104543 -n newest-cni-104543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104543 -n newest-cni-104543: exit status 2 (228.959398ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104543 -n newest-cni-104543
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104543 -n newest-cni-104543: exit status 2 (231.379451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-104543 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-104543 -n newest-cni-104543
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-104543 -n newest-cni-104543
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.18s)

                                                
                                    

Test skip (31/340)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-911019 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-911019" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-911019

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-911019" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-911019"

                                                
                                                
----------------------- debugLogs end: cilium-911019 [took: 3.459789717s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-911019" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-911019
--- SKIP: TestNetworkPlugins/group/cilium (3.61s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-526895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-526895
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    