Test Report: KVM_Linux_containerd 15909

e35e2c770ef92dfe730882c95f60d10525bed15b:2023-02-23:28027

Test failures (1/292)

| Order | Failed Test | Duration (s) |
|-------|-------------|--------------|
| 205   | TestPreload | 357.55       |
TestPreload (357.55s)
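In short, the test starts a cluster with --preload=false, pulls gcr.io/k8s-minikube/busybox, stops the VM, restarts it (this time downloading the v1.24.4 preload tarball), and then asserts that the previously pulled busybox image is still present in containerd's image cache. The crictl image ls output below contains only the preloaded images, so the pulled image did not survive the restart. A rough reproduction sketch assembled from the commands recorded in this log (the profile name test-preload-113143 is specific to this run, and the final grep is an illustrative manual check, not part of the test itself):

out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2 --container-runtime=containerd --kubernetes-version=v1.24.4
out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl pull gcr.io/k8s-minikube/busybox
out/minikube-linux-amd64 stop -p test-preload-113143
out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2 --container-runtime=containerd
# the test then expects busybox to appear in this listing
out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl image ls | grep busybox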

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0223 05:04:27.391311   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (2m3.684060563s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl pull gcr.io/k8s-minikube/busybox: (2.401288624s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-113143
E0223 05:05:45.125780   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 05:06:24.343379   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-113143: (1m32.19821159s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0223 05:07:50.400339   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-113143 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (2m16.098011397s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-113143 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	IMAGE                                     TAG                  IMAGE ID            SIZE
	docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482       25.8MB
	gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db       9.06MB
	k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a       13.6MB
	k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd       102MB
	k8s.gcr.io/kube-apiserver                 v1.24.4              6cab9d1bed1be       33.8MB
	k8s.gcr.io/kube-controller-manager        v1.24.4              1f99cb6da9a82       31MB
	k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58       39.5MB
	k8s.gcr.io/kube-scheduler                 v1.24.4              03fa22539fc1c       15.5MB
	k8s.gcr.io/pause                          3.7                  221177c6082a8       311kB

-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-02-23 05:09:33.487929216 +0000 UTC m=+2795.139514356
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-113143 -n test-preload-113143
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-113143 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-113143 logs -n 25: (1.111646148s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-945787 ssh -n                                                                 | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
	|         | multinode-945787-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-945787 ssh -n multinode-945787 sudo cat                                       | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
	|         | /home/docker/cp-test_multinode-945787-m03_multinode-945787.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-945787 cp multinode-945787-m03:/home/docker/cp-test.txt                       | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
	|         | multinode-945787-m02:/home/docker/cp-test_multinode-945787-m03_multinode-945787-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-945787 ssh -n                                                                 | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
	|         | multinode-945787-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-945787 ssh -n multinode-945787-m02 sudo cat                                   | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
	|         | /home/docker/cp-test_multinode-945787-m03_multinode-945787-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-945787 node stop m03                                                          | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:45 UTC |
	| node    | multinode-945787 node start                                                             | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:45 UTC | 23 Feb 23 04:46 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-945787                                                                | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:46 UTC |                     |
	| stop    | -p multinode-945787                                                                     | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:46 UTC | 23 Feb 23 04:49 UTC |
	| start   | -p multinode-945787                                                                     | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:49 UTC | 23 Feb 23 04:54 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-945787                                                                | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:54 UTC |                     |
	| node    | multinode-945787 node delete                                                            | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:54 UTC | 23 Feb 23 04:54 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-945787 stop                                                                   | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:54 UTC | 23 Feb 23 04:57 UTC |
	| start   | -p multinode-945787                                                                     | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 04:57 UTC | 23 Feb 23 05:02 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-945787                                                                | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 05:02 UTC |                     |
	| start   | -p multinode-945787-m02                                                                 | multinode-945787-m02 | jenkins | v1.29.0 | 23 Feb 23 05:02 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-945787-m03                                                                 | multinode-945787-m03 | jenkins | v1.29.0 | 23 Feb 23 05:02 UTC | 23 Feb 23 05:03 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-945787                                                                 | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC |                     |
	| delete  | -p multinode-945787-m03                                                                 | multinode-945787-m03 | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | 23 Feb 23 05:03 UTC |
	| delete  | -p multinode-945787                                                                     | multinode-945787     | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | 23 Feb 23 05:03 UTC |
	| start   | -p test-preload-113143                                                                  | test-preload-113143  | jenkins | v1.29.0 | 23 Feb 23 05:03 UTC | 23 Feb 23 05:05 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-113143                                                                  | test-preload-113143  | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:05 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-113143                                                                  | test-preload-113143  | jenkins | v1.29.0 | 23 Feb 23 05:05 UTC | 23 Feb 23 05:07 UTC |
	| start   | -p test-preload-113143                                                                  | test-preload-113143  | jenkins | v1.29.0 | 23 Feb 23 05:07 UTC | 23 Feb 23 05:09 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| ssh     | -p test-preload-113143 -- sudo                                                          | test-preload-113143  | jenkins | v1.29.0 | 23 Feb 23 05:09 UTC | 23 Feb 23 05:09 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 05:07:17
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 05:07:17.199394   25649 out.go:296] Setting OutFile to fd 1 ...
	I0223 05:07:17.199549   25649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 05:07:17.199556   25649 out.go:309] Setting ErrFile to fd 2...
	I0223 05:07:17.199561   25649 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 05:07:17.199659   25649 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	I0223 05:07:17.200171   25649 out.go:303] Setting JSON to false
	I0223 05:07:17.200968   25649 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":2981,"bootTime":1677125856,"procs":191,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 05:07:17.201025   25649 start.go:135] virtualization: kvm guest
	I0223 05:07:17.204770   25649 out.go:177] * [test-preload-113143] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 05:07:17.206833   25649 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 05:07:17.206781   25649 notify.go:220] Checking for updates...
	I0223 05:07:17.208771   25649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 05:07:17.210659   25649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 05:07:17.212490   25649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	I0223 05:07:17.214302   25649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 05:07:17.216099   25649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 05:07:17.218130   25649 config.go:182] Loaded profile config "test-preload-113143": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0223 05:07:17.218490   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:07:17.218559   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:07:17.232539   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42045
	I0223 05:07:17.232909   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:07:17.233570   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:07:17.233596   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:07:17.233965   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:07:17.234192   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:17.236476   25649 out.go:177] * Kubernetes 1.26.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.1
	I0223 05:07:17.237947   25649 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 05:07:17.238316   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:07:17.238354   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:07:17.251983   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0223 05:07:17.252375   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:07:17.252791   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:07:17.252812   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:07:17.253117   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:07:17.253314   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:17.287623   25649 out.go:177] * Using the kvm2 driver based on existing profile
	I0223 05:07:17.289266   25649 start.go:296] selected driver: kvm2
	I0223 05:07:17.289281   25649 start.go:857] validating driver "kvm2" against &{Name:test-preload-113143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 05:07:17.289391   25649 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 05:07:17.290133   25649 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 05:07:17.290199   25649 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3857/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 05:07:17.303744   25649 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 05:07:17.304036   25649 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0223 05:07:17.304074   25649 cni.go:84] Creating CNI manager for ""
	I0223 05:07:17.304085   25649 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0223 05:07:17.304098   25649 start_flags.go:319] config:
	{Name:test-preload-113143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 05:07:17.304208   25649 iso.go:125] acquiring lock: {Name:mk5ab603b94a1c1bcf9332974dc395e96678ad02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 05:07:17.306352   25649 out.go:177] * Starting control plane node test-preload-113143 in cluster test-preload-113143
	I0223 05:07:17.307999   25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0223 05:07:17.464405   25649 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0223 05:07:17.464448   25649 cache.go:57] Caching tarball of preloaded images
	I0223 05:07:17.464636   25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0223 05:07:17.466821   25649 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0223 05:07:17.468476   25649 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0223 05:07:17.621987   25649 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0223 05:07:40.595263   25649 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0223 05:07:40.595367   25649 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0223 05:07:41.455984   25649 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on containerd
	I0223 05:07:41.456125   25649 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/config.json ...
	I0223 05:07:41.456330   25649 cache.go:193] Successfully downloaded all kic artifacts
	I0223 05:07:41.456360   25649 start.go:364] acquiring machines lock for test-preload-113143: {Name:mke4f23d5c0e3b1877e0c2e0b8619868f067380e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0223 05:07:41.456413   25649 start.go:368] acquired machines lock for "test-preload-113143" in 37.228µs
	I0223 05:07:41.456428   25649 start.go:96] Skipping create...Using existing machine configuration
	I0223 05:07:41.456435   25649 fix.go:55] fixHost starting: 
	I0223 05:07:41.456739   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:07:41.456774   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:07:41.471020   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46861
	I0223 05:07:41.471511   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:07:41.472139   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:07:41.472162   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:07:41.472538   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:07:41.472766   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:41.472947   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
	I0223 05:07:41.474757   25649 fix.go:103] recreateIfNeeded on test-preload-113143: state=Stopped err=<nil>
	I0223 05:07:41.474788   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	W0223 05:07:41.474942   25649 fix.go:129] unexpected machine state, will restart: <nil>
	I0223 05:07:41.477589   25649 out.go:177] * Restarting existing kvm2 VM for "test-preload-113143" ...
	I0223 05:07:41.479402   25649 main.go:141] libmachine: (test-preload-113143) Calling .Start
	I0223 05:07:41.479614   25649 main.go:141] libmachine: (test-preload-113143) Ensuring networks are active...
	I0223 05:07:41.480404   25649 main.go:141] libmachine: (test-preload-113143) Ensuring network default is active
	I0223 05:07:41.480929   25649 main.go:141] libmachine: (test-preload-113143) Ensuring network mk-test-preload-113143 is active
	I0223 05:07:41.481371   25649 main.go:141] libmachine: (test-preload-113143) Getting domain xml...
	I0223 05:07:41.482092   25649 main.go:141] libmachine: (test-preload-113143) Creating domain...
	I0223 05:07:42.718470   25649 main.go:141] libmachine: (test-preload-113143) Waiting to get IP...
	I0223 05:07:42.719286   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:42.719790   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:42.719898   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:42.719802   25685 retry.go:31] will retry after 242.200393ms: waiting for machine to come up
	I0223 05:07:42.963258   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:42.963708   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:42.963731   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:42.963656   25685 retry.go:31] will retry after 245.679752ms: waiting for machine to come up
	I0223 05:07:43.211198   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:43.211673   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:43.211701   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:43.211642   25685 retry.go:31] will retry after 312.378164ms: waiting for machine to come up
	I0223 05:07:43.525218   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:43.525735   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:43.525766   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:43.525678   25685 retry.go:31] will retry after 371.12386ms: waiting for machine to come up
	I0223 05:07:43.898112   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:43.898567   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:43.898593   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:43.898516   25685 retry.go:31] will retry after 472.035541ms: waiting for machine to come up
	I0223 05:07:44.372140   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:44.372567   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:44.372584   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:44.372505   25685 retry.go:31] will retry after 867.802289ms: waiting for machine to come up
	I0223 05:07:45.241677   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:45.242106   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:45.242138   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:45.242037   25685 retry.go:31] will retry after 1.053402506s: waiting for machine to come up
	I0223 05:07:46.297149   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:46.297595   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:46.297627   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:46.297528   25685 retry.go:31] will retry after 1.268095409s: waiting for machine to come up
	I0223 05:07:47.567342   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:47.567757   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:47.567787   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:47.567706   25685 retry.go:31] will retry after 1.549144571s: waiting for machine to come up
	I0223 05:07:49.118344   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:49.118788   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:49.118823   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:49.118727   25685 retry.go:31] will retry after 1.399464384s: waiting for machine to come up
	I0223 05:07:50.520326   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:50.520769   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:50.520798   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:50.520715   25685 retry.go:31] will retry after 1.965483635s: waiting for machine to come up
	I0223 05:07:52.487224   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:52.487674   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:52.487694   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:52.487618   25685 retry.go:31] will retry after 2.653586815s: waiting for machine to come up
	I0223 05:07:55.144303   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:55.144681   25649 main.go:141] libmachine: (test-preload-113143) DBG | unable to find current IP address of domain test-preload-113143 in network mk-test-preload-113143
	I0223 05:07:55.144705   25649 main.go:141] libmachine: (test-preload-113143) DBG | I0223 05:07:55.144631   25685 retry.go:31] will retry after 3.236103195s: waiting for machine to come up
	I0223 05:07:58.381962   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.382485   25649 main.go:141] libmachine: (test-preload-113143) Found IP for machine: 192.168.39.53
	I0223 05:07:58.382507   25649 main.go:141] libmachine: (test-preload-113143) Reserving static IP address...
	I0223 05:07:58.382517   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has current primary IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.382996   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "test-preload-113143", mac: "52:54:00:16:b0:47", ip: "192.168.39.53"} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.383019   25649 main.go:141] libmachine: (test-preload-113143) Reserved static IP address: 192.168.39.53
	I0223 05:07:58.383036   25649 main.go:141] libmachine: (test-preload-113143) DBG | skip adding static IP to network mk-test-preload-113143 - found existing host DHCP lease matching {name: "test-preload-113143", mac: "52:54:00:16:b0:47", ip: "192.168.39.53"}
	I0223 05:07:58.383051   25649 main.go:141] libmachine: (test-preload-113143) Waiting for SSH to be available...
	I0223 05:07:58.383087   25649 main.go:141] libmachine: (test-preload-113143) DBG | Getting to WaitForSSH function...
	I0223 05:07:58.385204   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.385496   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.385528   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.385609   25649 main.go:141] libmachine: (test-preload-113143) DBG | Using SSH client type: external
	I0223 05:07:58.385641   25649 main.go:141] libmachine: (test-preload-113143) DBG | Using SSH private key: /home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa (-rw-------)
	I0223 05:07:58.385670   25649 main.go:141] libmachine: (test-preload-113143) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.53 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0223 05:07:58.385686   25649 main.go:141] libmachine: (test-preload-113143) DBG | About to run SSH command:
	I0223 05:07:58.385699   25649 main.go:141] libmachine: (test-preload-113143) DBG | exit 0
	I0223 05:07:58.481029   25649 main.go:141] libmachine: (test-preload-113143) DBG | SSH cmd err, output: <nil>: 
	I0223 05:07:58.481410   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetConfigRaw
	I0223 05:07:58.482045   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
	I0223 05:07:58.484716   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.485082   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.485118   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.485307   25649 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/config.json ...
	I0223 05:07:58.485495   25649 machine.go:88] provisioning docker machine ...
	I0223 05:07:58.485513   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:58.485728   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetMachineName
	I0223 05:07:58.485903   25649 buildroot.go:166] provisioning hostname "test-preload-113143"
	I0223 05:07:58.485935   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetMachineName
	I0223 05:07:58.486085   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:58.488073   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.488445   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.488475   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.488585   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:58.488740   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:58.488877   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:58.489047   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:58.489256   25649 main.go:141] libmachine: Using SSH client type: native
	I0223 05:07:58.489742   25649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0223 05:07:58.489756   25649 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-113143 && echo "test-preload-113143" | sudo tee /etc/hostname
	I0223 05:07:58.633892   25649 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-113143
	
	I0223 05:07:58.633930   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:58.636812   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.637259   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.637291   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.637540   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:58.637751   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:58.637948   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:58.638197   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:58.638392   25649 main.go:141] libmachine: Using SSH client type: native
	I0223 05:07:58.638790   25649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0223 05:07:58.638810   25649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-113143' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-113143/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-113143' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0223 05:07:58.777962   25649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0223 05:07:58.777993   25649 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/15909-3857/.minikube CaCertPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/15909-3857/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/15909-3857/.minikube}
	I0223 05:07:58.778013   25649 buildroot.go:174] setting up certificates
	I0223 05:07:58.778020   25649 provision.go:83] configureAuth start
	I0223 05:07:58.778029   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetMachineName
	I0223 05:07:58.778345   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
	I0223 05:07:58.781234   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.781560   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.781590   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.781706   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:58.784153   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.784557   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.784574   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.784703   25649 provision.go:138] copyHostCerts
	I0223 05:07:58.784771   25649 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3857/.minikube/ca.pem, removing ...
	I0223 05:07:58.784781   25649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.pem
	I0223 05:07:58.784860   25649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/15909-3857/.minikube/ca.pem (1082 bytes)
	I0223 05:07:58.784962   25649 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3857/.minikube/cert.pem, removing ...
	I0223 05:07:58.784985   25649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3857/.minikube/cert.pem
	I0223 05:07:58.785022   25649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/15909-3857/.minikube/cert.pem (1123 bytes)
	I0223 05:07:58.785207   25649 exec_runner.go:144] found /home/jenkins/minikube-integration/15909-3857/.minikube/key.pem, removing ...
	I0223 05:07:58.785223   25649 exec_runner.go:207] rm: /home/jenkins/minikube-integration/15909-3857/.minikube/key.pem
	I0223 05:07:58.785279   25649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/15909-3857/.minikube/key.pem (1679 bytes)
	I0223 05:07:58.785363   25649 provision.go:112] generating server cert: /home/jenkins/minikube-integration/15909-3857/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca-key.pem org=jenkins.test-preload-113143 san=[192.168.39.53 192.168.39.53 localhost 127.0.0.1 minikube test-preload-113143]
	I0223 05:07:58.929059   25649 provision.go:172] copyRemoteCerts
	I0223 05:07:58.929113   25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0223 05:07:58.929135   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:58.932008   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.932363   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:58.932388   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:58.932586   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:58.932843   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:58.933029   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:58.933203   25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
	I0223 05:07:59.027161   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0223 05:07:59.051784   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0223 05:07:59.072983   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0223 05:07:59.094820   25649 provision.go:86] duration metric: configureAuth took 316.788731ms
	I0223 05:07:59.094847   25649 buildroot.go:189] setting minikube options for container-runtime
	I0223 05:07:59.095030   25649 config.go:182] Loaded profile config "test-preload-113143": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0223 05:07:59.095044   25649 machine.go:91] provisioned docker machine in 609.537637ms
	I0223 05:07:59.095050   25649 start.go:300] post-start starting for "test-preload-113143" (driver="kvm2")
	I0223 05:07:59.095058   25649 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0223 05:07:59.095091   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:59.095414   25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0223 05:07:59.095440   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:59.098119   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.098451   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:59.098481   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.098647   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:59.098798   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:59.098942   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:59.099070   25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
	I0223 05:07:59.195290   25649 ssh_runner.go:195] Run: cat /etc/os-release
	I0223 05:07:59.199526   25649 info.go:137] Remote host: Buildroot 2021.02.12
	I0223 05:07:59.199546   25649 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3857/.minikube/addons for local assets ...
	I0223 05:07:59.199610   25649 filesync.go:126] Scanning /home/jenkins/minikube-integration/15909-3857/.minikube/files for local assets ...
	I0223 05:07:59.199677   25649 filesync.go:149] local asset: /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem -> 108972.pem in /etc/ssl/certs
	I0223 05:07:59.199755   25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0223 05:07:59.208817   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem --> /etc/ssl/certs/108972.pem (1708 bytes)
	I0223 05:07:59.230027   25649 start.go:303] post-start completed in 134.962953ms
	I0223 05:07:59.230057   25649 fix.go:57] fixHost completed within 17.773619763s
	I0223 05:07:59.230082   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:59.232783   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.233222   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:59.233249   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.233501   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:59.233664   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:59.233812   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:59.233917   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:59.234089   25649 main.go:141] libmachine: Using SSH client type: native
	I0223 05:07:59.234589   25649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x17560a0] 0x1759120 <nil>  [] 0s} 192.168.39.53 22 <nil> <nil>}
	I0223 05:07:59.234604   25649 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0223 05:07:59.365976   25649 main.go:141] libmachine: SSH cmd err, output: <nil>: 1677128879.330753450
	
	I0223 05:07:59.366001   25649 fix.go:207] guest clock: 1677128879.330753450
	I0223 05:07:59.366011   25649 fix.go:220] Guest: 2023-02-23 05:07:59.33075345 +0000 UTC Remote: 2023-02-23 05:07:59.2300616 +0000 UTC m=+42.069074072 (delta=100.69185ms)
	I0223 05:07:59.366031   25649 fix.go:191] guest clock delta is within tolerance: 100.69185ms
	I0223 05:07:59.366036   25649 start.go:83] releasing machines lock for "test-preload-113143", held for 17.909612918s
	I0223 05:07:59.366054   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:59.366319   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
	I0223 05:07:59.369119   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.369450   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:59.369478   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.369655   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:59.370120   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:59.370279   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:07:59.370389   25649 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0223 05:07:59.370428   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:59.370465   25649 ssh_runner.go:195] Run: cat /version.json
	I0223 05:07:59.370488   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:07:59.372856   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.373194   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:59.373222   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.373242   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.373360   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:59.373588   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:59.373691   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:07:59.373722   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:07:59.373757   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:59.373909   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:07:59.373986   25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
	I0223 05:07:59.374124   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:07:59.374253   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:07:59.374384   25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
	I0223 05:07:59.462006   25649 ssh_runner.go:195] Run: systemctl --version
	I0223 05:07:59.587244   25649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0223 05:07:59.593062   25649 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0223 05:07:59.593140   25649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0223 05:07:59.609722   25649 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
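	[Editor's note: the find/mv pipeline above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube selects later (bridge, per cni.go further down) stays active. A minimal, hypothetical Go sketch of the same rename logic, not minikube's actual code; point it at a scratch copy rather than a live /etc/cni/net.d when experimenting.]

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// List candidate CNI config files, as the log's find does.
	confs, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, p := range confs {
		name := filepath.Base(p)
		if strings.HasSuffix(name, ".mk_disabled") {
			continue // already sidelined
		}
		// Disable only bridge/podman configs, matching the -name filters.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Println("rename failed:", err)
				continue
			}
			fmt.Println("disabled", p)
		}
	}
}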
	I0223 05:07:59.609744   25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0223 05:07:59.609845   25649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0223 05:08:03.640598   25649 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.030724865s)
	I0223 05:08:03.640731   25649 containerd.go:604] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0223 05:08:03.640780   25649 ssh_runner.go:195] Run: which lz4
	I0223 05:08:03.644860   25649 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0223 05:08:03.648854   25649 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0223 05:08:03.648888   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
	I0223 05:08:05.429827   25649 containerd.go:551] Took 1.784998 seconds to copy over tarball
	I0223 05:08:05.429913   25649 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0223 05:08:08.520478   25649 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.090542368s)
	I0223 05:08:08.520502   25649 containerd.go:558] Took 3.090646 seconds to extract the tarball
	I0223 05:08:08.520510   25649 ssh_runner.go:146] rm: /preloaded.tar.lz4
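	[Editor's note: the preload restore path above is: stat fails, so the tarball is scp'd over, lz4-extracted under /var, then deleted. A minimal sketch of the same check-then-extract flow, assuming the lz4 binary is installed; this is illustrative, not minikube's implementation.]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4" // target path used in the log
	// Existence check, like the failing stat above: if the tarball is not
	// on the guest yet, it has to be copied over (the log uses scp) first.
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("tarball not present, copy it first:", err)
		return
	}
	// Same extraction the runner performs: lz4-decompress and unpack the
	// preloaded images under /var.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
		return
	}
	// The runner deletes the tarball afterwards to reclaim disk space.
	_ = os.Remove(tarball)
}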
	I0223 05:08:08.560242   25649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 05:08:08.653618   25649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 05:08:08.670957   25649 start.go:485] detecting cgroup driver to use...
	I0223 05:08:08.671028   25649 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0223 05:08:11.328472   25649 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.657426272s)
	I0223 05:08:11.328526   25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0223 05:08:11.341485   25649 docker.go:186] disabling cri-docker service (if available) ...
	I0223 05:08:11.341556   25649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0223 05:08:11.356921   25649 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0223 05:08:11.371823   25649 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0223 05:08:11.472386   25649 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0223 05:08:11.572476   25649 docker.go:202] disabling docker service ...
	I0223 05:08:11.572540   25649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0223 05:08:11.587829   25649 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0223 05:08:11.600726   25649 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0223 05:08:11.700527   25649 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0223 05:08:11.795882   25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0223 05:08:11.809587   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0223 05:08:11.829239   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
	I0223 05:08:11.838813   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0223 05:08:11.848244   25649 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0223 05:08:11.848310   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0223 05:08:11.857628   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 05:08:11.866681   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0223 05:08:11.875817   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0223 05:08:11.884840   25649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0223 05:08:11.894438   25649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
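	[Editor's note: the sed commands above all follow one pattern: rewrite a single key in /etc/containerd/config.toml in place. A hypothetical Go equivalent of the SystemdCgroup edit; the sample TOML fragment is illustrative, not the job's real config.]

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Stand-in for a fragment of /etc/containerd/config.toml.
	conf := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of the log's sed: force SystemdCgroup = false so containerd
	// uses cgroupfs, matching the kubelet's cgroupDriver further down.
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
}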
	I0223 05:08:11.903524   25649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0223 05:08:11.911821   25649 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0223 05:08:11.911887   25649 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0223 05:08:11.925604   25649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
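	[Editor's note: the sysctl key /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the first probe above exits with status 255 before the modprobe. A minimal sketch of that check-load-recheck sequence; illustrative only, and it requires root.]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const key = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	// The file is absent until br_netfilter is loaded.
	if _, err := os.Stat(key); err != nil {
		// Load the module, as the runner does; afterwards the key appears.
		if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Println("modprobe failed:", err, string(out))
			return
		}
	}
	val, err := os.ReadFile(key)
	if err != nil {
		fmt.Println("key still missing:", err)
		return
	}
	fmt.Printf("bridge-nf-call-iptables = %s", val)
}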
	I0223 05:08:11.934687   25649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0223 05:08:12.029355   25649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0223 05:08:12.051952   25649 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
	I0223 05:08:12.052031   25649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0223 05:08:12.059508   25649 retry.go:31] will retry after 1.231814604s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0223 05:08:13.292172   25649 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
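	[Editor's note: after restarting containerd, the runner polls for the socket path with short retries ("Will wait 60s for socket path", one 1.23s retry above). A minimal sketch of that poll loop, assuming a fixed backoff; not minikube's actual retry code.]

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path until it exists or the
// timeout elapses, mirroring the stat-based retry in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(1200 * time.Millisecond) // comparable to the ~1.23s retry above
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("socket is up")
}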
	I0223 05:08:13.297608   25649 start.go:553] Will wait 60s for crictl version
	I0223 05:08:13.297683   25649 ssh_runner.go:195] Run: which crictl
	I0223 05:08:13.301559   25649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0223 05:08:13.334139   25649 start.go:569] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.15
	RuntimeApiVersion:  v1alpha2
	I0223 05:08:13.334214   25649 ssh_runner.go:195] Run: containerd --version
	I0223 05:08:13.361722   25649 ssh_runner.go:195] Run: containerd --version
	I0223 05:08:13.391188   25649 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.15 ...
	I0223 05:08:13.393100   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetIP
	I0223 05:08:13.396316   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:08:13.396740   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:08:13.396769   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:08:13.397018   25649 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0223 05:08:13.401321   25649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0223 05:08:13.412909   25649 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0223 05:08:13.412989   25649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0223 05:08:13.442114   25649 containerd.go:608] all images are preloaded for containerd runtime.
	I0223 05:08:13.442137   25649 containerd.go:522] Images already preloaded, skipping extraction
	I0223 05:08:13.442192   25649 ssh_runner.go:195] Run: sudo crictl images --output json
	I0223 05:08:13.472062   25649 containerd.go:608] all images are preloaded for containerd runtime.
	I0223 05:08:13.472089   25649 cache_images.go:84] Images are preloaded, skipping loading
	I0223 05:08:13.472146   25649 ssh_runner.go:195] Run: sudo crictl info
	I0223 05:08:13.503198   25649 cni.go:84] Creating CNI manager for ""
	I0223 05:08:13.503218   25649 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0223 05:08:13.503233   25649 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0223 05:08:13.503250   25649 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.53 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-113143 NodeName:test-preload-113143 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0223 05:08:13.503346   25649 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-113143"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0223 05:08:13.503420   25649 kubeadm.go:968] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-113143 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0223 05:08:13.503466   25649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0223 05:08:13.513124   25649 binaries.go:44] Found k8s binaries, skipping transfer
	I0223 05:08:13.513198   25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0223 05:08:13.521506   25649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (483 bytes)
	I0223 05:08:13.537467   25649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0223 05:08:13.553753   25649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0223 05:08:13.569923   25649 ssh_runner.go:195] Run: grep 192.168.39.53	control-plane.minikube.internal$ /etc/hosts
	I0223 05:08:13.573737   25649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.53	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
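	[Editor's note: both /etc/hosts edits in this log (host.minikube.internal earlier, control-plane.minikube.internal here) use the same replace-or-append pattern: drop any existing line for the name, then append "IP<TAB>name". A small, hypothetical Go rendering of that pattern operating on an in-memory string.]

package main

import (
	"fmt"
	"strings"
)

// upsertHostsLine reproduces the bash pipeline above: remove any line
// already ending in "<TAB>name", then append the fresh mapping.
func upsertHostsLine(hosts, name, ip string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	return strings.Join(kept, "\n") + fmt.Sprintf("\n%s\t%s\n", ip, name)
}

func main() {
	sample := "127.0.0.1\tlocalhost" // stand-in for /etc/hosts
	fmt.Print(upsertHostsLine(sample, "control-plane.minikube.internal", "192.168.39.53"))
}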
	I0223 05:08:13.585191   25649 certs.go:56] Setting up /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143 for IP: 192.168.39.53
	I0223 05:08:13.585224   25649 certs.go:186] acquiring lock for shared ca certs: {Name:mk147ec0d78f2171aa54104168d81016e3102ce0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 05:08:13.585405   25649 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.key
	I0223 05:08:13.585460   25649 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/15909-3857/.minikube/proxy-client-ca.key
	I0223 05:08:13.585552   25649 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key
	I0223 05:08:13.585623   25649 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/apiserver.key.52e6c991
	I0223 05:08:13.585679   25649 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/proxy-client.key
	I0223 05:08:13.585799   25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/10897.pem (1338 bytes)
	W0223 05:08:13.585848   25649 certs.go:397] ignoring /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/10897_empty.pem, impossibly tiny 0 bytes
	I0223 05:08:13.585863   25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca-key.pem (1679 bytes)
	I0223 05:08:13.585888   25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/ca.pem (1082 bytes)
	I0223 05:08:13.585911   25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/cert.pem (1123 bytes)
	I0223 05:08:13.585939   25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/certs/home/jenkins/minikube-integration/15909-3857/.minikube/certs/key.pem (1679 bytes)
	I0223 05:08:13.585977   25649 certs.go:401] found cert: /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem (1708 bytes)
	I0223 05:08:13.586474   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0223 05:08:13.609249   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0223 05:08:13.631953   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0223 05:08:13.654262   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0223 05:08:13.676005   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0223 05:08:13.698579   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0223 05:08:13.720511   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0223 05:08:13.742388   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0223 05:08:13.764734   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0223 05:08:13.786568   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/certs/10897.pem --> /usr/share/ca-certificates/10897.pem (1338 bytes)
	I0223 05:08:13.808858   25649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/ssl/certs/108972.pem --> /usr/share/ca-certificates/108972.pem (1708 bytes)
	I0223 05:08:13.830415   25649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0223 05:08:13.846105   25649 ssh_runner.go:195] Run: openssl version
	I0223 05:08:13.851624   25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/108972.pem && ln -fs /usr/share/ca-certificates/108972.pem /etc/ssl/certs/108972.pem"
	I0223 05:08:13.862169   25649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/108972.pem
	I0223 05:08:13.866777   25649 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Feb 23 04:30 /usr/share/ca-certificates/108972.pem
	I0223 05:08:13.866834   25649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/108972.pem
	I0223 05:08:13.872260   25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/108972.pem /etc/ssl/certs/3ec20f2e.0"
	I0223 05:08:13.882611   25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0223 05:08:13.892746   25649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0223 05:08:13.897423   25649 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Feb 23 04:24 /usr/share/ca-certificates/minikubeCA.pem
	I0223 05:08:13.897478   25649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0223 05:08:13.903129   25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0223 05:08:13.913460   25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/10897.pem && ln -fs /usr/share/ca-certificates/10897.pem /etc/ssl/certs/10897.pem"
	I0223 05:08:13.923690   25649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/10897.pem
	I0223 05:08:13.928178   25649 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Feb 23 04:30 /usr/share/ca-certificates/10897.pem
	I0223 05:08:13.928231   25649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/10897.pem
	I0223 05:08:13.933797   25649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/10897.pem /etc/ssl/certs/51391683.0"
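	[Editor's note: the symlink names created above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-hash lookups: OpenSSL finds CA certs in /etc/ssl/certs by "<hash>.0". A sketch that shells out to the same openssl invocation the runner uses to derive the expected link name; the input path is one from the log.]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command as the log: print the certificate's subject hash.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	// e.g. b5213941 for minikubeCA, matching the b5213941.0 link above.
	fmt.Printf("expected symlink: /etc/ssl/certs/%s.0\n", hash)
}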
	I0223 05:08:13.943997   25649 kubeadm.go:401] StartCluster: {Name:test-preload-113143 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-113143 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 05:08:13.944113   25649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0223 05:08:13.944176   25649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0223 05:08:13.973285   25649 cri.go:87] found id: ""
	I0223 05:08:13.973361   25649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0223 05:08:13.983168   25649 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0223 05:08:13.983190   25649 kubeadm.go:633] restartCluster start
	I0223 05:08:13.983244   25649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0223 05:08:13.992978   25649 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:13.993473   25649 kubeconfig.go:135] verify returned: extract IP: "test-preload-113143" does not appear in /home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 05:08:13.993600   25649 kubeconfig.go:146] "test-preload-113143" context is missing from /home/jenkins/minikube-integration/15909-3857/kubeconfig - will repair!
	I0223 05:08:13.994030   25649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3857/kubeconfig: {Name:mkddc8f3473e702a00229e22f9312b560d0d7a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 05:08:13.994960   25649 kapi.go:59] client config for test-preload-113143: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 05:08:13.996111   25649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0223 05:08:14.005368   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:14.005419   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:14.016483   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:14.517221   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:14.517310   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:14.530287   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:15.016883   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:15.016976   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:15.029362   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:15.516921   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:15.517009   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:15.529274   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:16.016755   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:16.016828   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:16.029247   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:16.517481   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:16.517579   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:16.529984   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:17.016556   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:17.016655   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:17.029274   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:17.517512   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:17.517611   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:17.529815   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:18.017453   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:18.017570   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:18.030745   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:18.517334   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:18.517437   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:18.530492   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:19.017047   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:19.017119   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:19.029048   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:19.516639   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:19.516717   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:19.528888   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:20.017004   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:20.017076   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:20.029538   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:20.517078   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:20.517182   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:20.529300   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:21.016879   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:21.016949   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:21.029021   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:21.517030   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:21.517148   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:21.529484   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:22.016972   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:22.017085   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:22.030420   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:22.517206   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:22.517283   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:22.530610   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:23.017232   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:23.017317   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:23.029494   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:23.517088   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:23.517192   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:23.529324   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:24.017036   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:24.017116   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:24.029048   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:24.029073   25649 api_server.go:165] Checking apiserver status ...
	I0223 05:08:24.029121   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0223 05:08:24.040476   25649 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0223 05:08:24.040517   25649 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0223 05:08:24.040525   25649 kubeadm.go:1120] stopping kube-system containers ...
	I0223 05:08:24.040537   25649 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0223 05:08:24.040582   25649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0223 05:08:24.071808   25649 cri.go:87] found id: ""
	I0223 05:08:24.071876   25649 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0223 05:08:24.088332   25649 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0223 05:08:24.096887   25649 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0223 05:08:24.096935   25649 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0223 05:08:24.105277   25649 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0223 05:08:24.105300   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 05:08:24.211352   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 05:08:25.132391   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0223 05:08:25.476116   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 05:08:25.549785   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
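	[Editor's note: because existing configuration files were found, the restart path re-runs individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of a full "kubeadm init". A sketch that drives the same phase sequence via the versioned binary path from the log; the real runner additionally prefixes PATH via "sudo env".]

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const kubeadm = "/var/lib/minikube/binaries/v1.24.4/kubeadm"
	// Phase sequence as invoked in the log, in order.
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, ph := range phases {
		args := append(ph, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", ph, err)
			return
		}
	}
}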
	I0223 05:08:25.644028   25649 api_server.go:51] waiting for apiserver process to appear ...
	I0223 05:08:25.644110   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 05:08:26.156509   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 05:08:26.656373   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 05:08:27.156841   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 05:08:27.168423   25649 api_server.go:71] duration metric: took 1.524405489s to wait for apiserver process to appear ...
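	[Editor's note: the half-second pgrep polling above waits for a kube-apiserver process to exist before any HTTP health checks begin. A minimal sketch of that wait, assuming a fixed 500ms interval and a caller-supplied timeout; not the api_server.go implementation itself.]

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerPID returns once a process matching the same pgrep
// pattern as the log exists, or errors after the timeout.
func waitForAPIServerPID(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when at least one process matches.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerPID(60 * time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}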
	I0223 05:08:27.168455   25649 api_server.go:87] waiting for apiserver healthz status ...
	I0223 05:08:27.168468   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:32.169192   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0223 05:08:32.670214   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:37.671107   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0223 05:08:38.169742   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:43.169986   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0223 05:08:43.669634   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:47.251123   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": read tcp 192.168.39.1:55308->192.168.39.53:8443: read: connection reset by peer
	I0223 05:08:47.669523   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:47.670161   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:48.169917   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:48.170615   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:48.670314   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:48.670973   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:49.169433   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:49.169999   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:49.669569   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:49.670193   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:50.169433   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:50.169986   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:50.669588   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:50.670279   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:51.169429   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:51.170030   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:51.670242   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:51.670925   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:52.169438   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:52.169977   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:52.669695   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:52.670331   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:53.170027   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:53.170727   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:53.669326   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:53.669962   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:54.169768   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:54.170431   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:54.670108   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:54.670782   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:55.169349   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:55.170008   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:55.669573   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:55.670246   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:56.169781   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:56.170387   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:56.669437   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:56.670119   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:57.169798   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:57.170464   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:57.670175   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:57.670846   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:58.169434   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:58.170118   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:58.669689   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:58.670301   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:59.169967   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:59.170645   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:08:59.670303   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:08:59.670866   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:00.169447   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:00.170015   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:00.669609   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:00.670242   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:01.169844   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:01.170442   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:01.669666   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:01.670286   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:02.169968   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:02.170504   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:02.669327   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:02.669990   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:03.169523   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:03.170092   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:03.669673   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:03.670402   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:04.170162   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:04.170804   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:04.670373   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:04.670978   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:05.169540   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:05.170213   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:05.669969   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:05.670646   25649 api_server.go:268] stopped: https://192.168.39.53:8443/healthz: Get "https://192.168.39.53:8443/healthz": dial tcp 192.168.39.53:8443: connect: connection refused
	I0223 05:09:06.170307   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:09.053386   25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0223 05:09:09.053420   25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0223 05:09:09.169642   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:09.179246   25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 05:09:09.179281   25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 05:09:09.669828   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:09.679957   25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 05:09:09.679988   25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
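"reason withheld" in the 500 bodies above means the apiserver hides the failing check's detail from callers that aren't authorized to see it; querying /healthz?verbose with admin credentials typically returns the underlying reasons. A hedged client-go sketch of that authorized query, assuming the integration kubeconfig path from the log (illustrative, not part of the test):

	package main

	import (
		"context"
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed path: the admin kubeconfig this run writes (see settings.go above).
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/15909-3857/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		// Authorized callers get per-check reasons instead of "reason withheld".
		body, err := cs.Discovery().RESTClient().Get().
			AbsPath("/healthz").Param("verbose", "true").
			DoRaw(context.Background())
		if err != nil {
			log.Println(err) // a failing healthz returns an error along with the body
		}
		fmt.Println(string(body))
	}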
	I0223 05:09:10.169518   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:10.175682   25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0223 05:09:10.175706   25649 api_server.go:102] status: https://192.168.39.53:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0223 05:09:10.669296   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:10.675585   25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 200:
	ok
	I0223 05:09:10.683086   25649 api_server.go:140] control plane version: v1.24.4
	I0223 05:09:10.683109   25649 api_server.go:130] duration metric: took 43.514648081s to wait for apiserver health ...
	I0223 05:09:10.683119   25649 cni.go:84] Creating CNI manager for ""
	I0223 05:09:10.683125   25649 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0223 05:09:10.685580   25649 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0223 05:09:10.687507   25649 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0223 05:09:10.698779   25649 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
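The 457-byte conflist copied above is minikube's generated bridge CNI config; the log doesn't show its contents. A representative bridge + portmap conflist written the same way would look roughly like the sketch below (the JSON here is an assumption, not the exact file the test produced):

	package main

	import (
		"log"
		"os"
	)

	// Representative bridge CNI config; minikube's generated 1-k8s.conflist
	// may differ in fields and subnet.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
	    },
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}`

	func main() {
		// Requires root, like the sudo mkdir/scp steps in the log.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}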
	I0223 05:09:10.719301   25649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 05:09:10.729302   25649 system_pods.go:59] 7 kube-system pods found
	I0223 05:09:10.729336   25649 system_pods.go:61] "coredns-6d4b75cb6d-mmpvt" [3928e1dc-58bd-434f-bc29-8c20afb5e112] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0223 05:09:10.729342   25649 system_pods.go:61] "etcd-test-preload-113143" [65f0e6f1-4ff2-49bd-9f2f-58967808df14] Running
	I0223 05:09:10.729348   25649 system_pods.go:61] "kube-apiserver-test-preload-113143" [e28969a2-5979-483e-bd07-658187cffae5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0223 05:09:10.729354   25649 system_pods.go:61] "kube-controller-manager-test-preload-113143" [055f8ab8-0181-4121-8993-88d236e645c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0223 05:09:10.729366   25649 system_pods.go:61] "kube-proxy-bq8xz" [b957cd83-fc56-48cc-a924-775e7a3ad79f] Running
	I0223 05:09:10.729370   25649 system_pods.go:61] "kube-scheduler-test-preload-113143" [901702d4-f84c-4418-a3df-ea323600a55d] Running
	I0223 05:09:10.729375   25649 system_pods.go:61] "storage-provisioner" [a4976d12-2647-4fa6-8366-5d94a2155a2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0223 05:09:10.729380   25649 system_pods.go:74] duration metric: took 10.059269ms to wait for pod list to return data ...
	I0223 05:09:10.729386   25649 node_conditions.go:102] verifying NodePressure condition ...
	I0223 05:09:10.732752   25649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 05:09:10.732785   25649 node_conditions.go:123] node cpu capacity is 2
	I0223 05:09:10.732804   25649 node_conditions.go:105] duration metric: took 3.413596ms to run NodePressure ...
	I0223 05:09:10.732822   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0223 05:09:10.949088   25649 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0223 05:09:10.953473   25649 kubeadm.go:784] kubelet initialised
	I0223 05:09:10.953498   25649 kubeadm.go:785] duration metric: took 4.383999ms waiting for restarted kubelet to initialise ...
	I0223 05:09:10.953506   25649 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 05:09:10.958494   25649 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:11.975358   25649 pod_ready.go:97] node "test-preload-113143" hosting pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.975388   25649 pod_ready.go:81] duration metric: took 1.016870166s waiting for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
	E0223 05:09:11.975396   25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.975402   25649 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:11.980129   25649 pod_ready.go:97] node "test-preload-113143" hosting pod "etcd-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.980146   25649 pod_ready.go:81] duration metric: took 4.738654ms waiting for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	E0223 05:09:11.980153   25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "etcd-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.980159   25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:11.984045   25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-apiserver-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.984063   25649 pod_ready.go:81] duration metric: took 3.898484ms waiting for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	E0223 05:09:11.984071   25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-apiserver-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.984076   25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:11.988262   25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.988280   25649 pod_ready.go:81] duration metric: took 4.198948ms waiting for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	E0223 05:09:11.988287   25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:11.988292   25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:12.323428   25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-proxy-bq8xz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:12.323456   25649 pod_ready.go:81] duration metric: took 335.157819ms waiting for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
	E0223 05:09:12.323466   25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-proxy-bq8xz" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:12.323475   25649 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:12.723593   25649 pod_ready.go:97] node "test-preload-113143" hosting pod "kube-scheduler-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:12.723619   25649 pod_ready.go:81] duration metric: took 400.136639ms waiting for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	E0223 05:09:12.723630   25649 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-113143" hosting pod "kube-scheduler-test-preload-113143" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:12.723639   25649 pod_ready.go:38] duration metric: took 1.770125437s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
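The loop above treats a pod as not-Ready whenever its hosting node isn't Ready, logs the mismatch, and retries until the wait budget runs out. A simplified equivalent of the per-pod readiness wait in client-go (a helper sketch; minikube's pod_ready.go handles more cases, including the node-status check):

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// WaitPodReady polls until the pod is Ready or the deadline passes,
	// mirroring the "waiting up to 4m0s for pod ..." loop in the log.
	func WaitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			p, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
			if err == nil && podReady(p) {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
	}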
	I0223 05:09:12.723657   25649 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0223 05:09:12.735111   25649 ops.go:34] apiserver oom_adj: -16
	I0223 05:09:12.735135   25649 kubeadm.go:637] restartCluster took 58.7519382s
	I0223 05:09:12.735144   25649 kubeadm.go:403] StartCluster complete in 58.791151978s
	I0223 05:09:12.735164   25649 settings.go:142] acquiring lock: {Name:mka9282d684f4d0ba7e9349607973a3a5eb0818b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 05:09:12.735244   25649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 05:09:12.735870   25649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/15909-3857/kubeconfig: {Name:mkddc8f3473e702a00229e22f9312b560d0d7a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0223 05:09:12.736109   25649 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0223 05:09:12.736198   25649 addons.go:489] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0223 05:09:12.736282   25649 addons.go:65] Setting storage-provisioner=true in profile "test-preload-113143"
	I0223 05:09:12.736296   25649 addons.go:227] Setting addon storage-provisioner=true in "test-preload-113143"
	W0223 05:09:12.736304   25649 addons.go:236] addon storage-provisioner should already be in state true
	I0223 05:09:12.736361   25649 host.go:66] Checking if "test-preload-113143" exists ...
	I0223 05:09:12.736354   25649 addons.go:65] Setting default-storageclass=true in profile "test-preload-113143"
	I0223 05:09:12.736400   25649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-113143"
	I0223 05:09:12.736406   25649 config.go:182] Loaded profile config "test-preload-113143": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0223 05:09:12.736712   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:09:12.736671   25649 kapi.go:59] client config for test-preload-113143: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0223 05:09:12.736757   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:09:12.736871   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:09:12.736930   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:09:12.740166   25649 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-113143" context rescaled to 1 replicas
	I0223 05:09:12.740200   25649 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.53 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0223 05:09:12.743792   25649 out.go:177] * Verifying Kubernetes components...
	I0223 05:09:12.745627   25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 05:09:12.752188   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43813
	I0223 05:09:12.752607   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:09:12.753170   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:09:12.753194   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:09:12.753534   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:09:12.753727   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
	I0223 05:09:12.755629   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41807
	I0223 05:09:12.756000   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:09:12.756458   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:09:12.756486   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:09:12.756453   25649 kapi.go:59] client config for test-preload-113143: &rest.Config{Host:"https://192.168.39.53:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.crt", KeyFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143/client.key", CAFile:"/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x299afe0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
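The rest.Config dumps above show pure client-certificate authentication against https://192.168.39.53:8443. The same client can be reconstructed from those paths with client-go; a minimal sketch (the nodes List call at the end is illustrative, not something the test runs here):

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		profile := "/home/jenkins/minikube-integration/15909-3857/.minikube/profiles/test-preload-113143"
		cfg := &rest.Config{
			Host: "https://192.168.39.53:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: profile + "/client.crt",
				KeyFile:  profile + "/client.key",
				CAFile:   "/home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt",
			},
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			fmt.Println(n.Name)
		}
	}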
	I0223 05:09:12.756839   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:09:12.757429   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:09:12.757474   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:09:12.769453   25649 addons.go:227] Setting addon default-storageclass=true in "test-preload-113143"
	W0223 05:09:12.769472   25649 addons.go:236] addon default-storageclass should already be in state true
	I0223 05:09:12.769496   25649 host.go:66] Checking if "test-preload-113143" exists ...
	I0223 05:09:12.769860   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:09:12.769915   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:09:12.771927   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42863
	I0223 05:09:12.772317   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:09:12.772862   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:09:12.772890   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:09:12.773219   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:09:12.773424   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
	I0223 05:09:12.774876   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:09:12.777199   25649 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0223 05:09:12.778819   25649 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 05:09:12.778838   25649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0223 05:09:12.778856   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:09:12.782177   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:09:12.782718   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:09:12.782743   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:09:12.782994   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:09:12.783179   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:09:12.783344   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:09:12.783502   25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
	I0223 05:09:12.786602   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34453
	I0223 05:09:12.786945   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:09:12.787431   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:09:12.787454   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:09:12.787841   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:09:12.788283   25649 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 05:09:12.788319   25649 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 05:09:12.802826   25649 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40907
	I0223 05:09:12.803246   25649 main.go:141] libmachine: () Calling .GetVersion
	I0223 05:09:12.803820   25649 main.go:141] libmachine: Using API Version  1
	I0223 05:09:12.803844   25649 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 05:09:12.804187   25649 main.go:141] libmachine: () Calling .GetMachineName
	I0223 05:09:12.804390   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetState
	I0223 05:09:12.806151   25649 main.go:141] libmachine: (test-preload-113143) Calling .DriverName
	I0223 05:09:12.806497   25649 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I0223 05:09:12.806515   25649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0223 05:09:12.806536   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHHostname
	I0223 05:09:12.809692   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:09:12.809961   25649 main.go:141] libmachine: (test-preload-113143) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:16:b0:47", ip: ""} in network mk-test-preload-113143: {Iface:virbr1 ExpiryTime:2023-02-23 06:03:54 +0000 UTC Type:0 Mac:52:54:00:16:b0:47 Iaid: IPaddr:192.168.39.53 Prefix:24 Hostname:test-preload-113143 Clientid:01:52:54:00:16:b0:47}
	I0223 05:09:12.809991   25649 main.go:141] libmachine: (test-preload-113143) DBG | domain test-preload-113143 has defined IP address 192.168.39.53 and MAC address 52:54:00:16:b0:47 in network mk-test-preload-113143
	I0223 05:09:12.810244   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHPort
	I0223 05:09:12.810402   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHKeyPath
	I0223 05:09:12.810512   25649 main.go:141] libmachine: (test-preload-113143) Calling .GetSSHUsername
	I0223 05:09:12.810640   25649 sshutil.go:53] new ssh client: &{IP:192.168.39.53 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/test-preload-113143/id_rsa Username:docker}
	I0223 05:09:12.944134   25649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0223 05:09:12.963210   25649 node_ready.go:35] waiting up to 6m0s for node "test-preload-113143" to be "Ready" ...
	I0223 05:09:12.963225   25649 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0223 05:09:12.973424   25649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0223 05:09:13.762152   25649 main.go:141] libmachine: Making call to close driver server
	I0223 05:09:13.762183   25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
	I0223 05:09:13.762207   25649 main.go:141] libmachine: Making call to close driver server
	I0223 05:09:13.762227   25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
	I0223 05:09:13.762514   25649 main.go:141] libmachine: Successfully made call to close driver server
	I0223 05:09:13.762537   25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
	I0223 05:09:13.762547   25649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0223 05:09:13.762557   25649 main.go:141] libmachine: Making call to close driver server
	I0223 05:09:13.762561   25649 main.go:141] libmachine: Successfully made call to close driver server
	I0223 05:09:13.762566   25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
	I0223 05:09:13.762572   25649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0223 05:09:13.762514   25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
	I0223 05:09:13.762580   25649 main.go:141] libmachine: Making call to close driver server
	I0223 05:09:13.762621   25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
	I0223 05:09:13.762791   25649 main.go:141] libmachine: Successfully made call to close driver server
	I0223 05:09:13.762811   25649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0223 05:09:13.762795   25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
	I0223 05:09:13.762835   25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
	I0223 05:09:13.762873   25649 main.go:141] libmachine: Successfully made call to close driver server
	I0223 05:09:13.762890   25649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0223 05:09:13.762914   25649 main.go:141] libmachine: Making call to close driver server
	I0223 05:09:13.762927   25649 main.go:141] libmachine: (test-preload-113143) Calling .Close
	I0223 05:09:13.763214   25649 main.go:141] libmachine: (test-preload-113143) DBG | Closing plugin on server side
	I0223 05:09:13.763252   25649 main.go:141] libmachine: Successfully made call to close driver server
	I0223 05:09:13.763267   25649 main.go:141] libmachine: Making call to close connection to plugin binary
	I0223 05:09:13.765721   25649 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0223 05:09:13.767394   25649 addons.go:492] enable addons completed in 1.031203078s: enabled=[storage-provisioner default-storageclass]
	I0223 05:09:14.971052   25649 node_ready.go:58] node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:17.470330   25649 node_ready.go:58] node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:19.470714   25649 node_ready.go:58] node "test-preload-113143" has status "Ready":"False"
	I0223 05:09:21.970214   25649 node_ready.go:49] node "test-preload-113143" has status "Ready":"True"
	I0223 05:09:21.970238   25649 node_ready.go:38] duration metric: took 9.006994732s waiting for node "test-preload-113143" to be "Ready" ...
	I0223 05:09:21.970246   25649 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 05:09:21.977729   25649 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:23.988963   25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
	I0223 05:09:25.989705   25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
	I0223 05:09:27.991137   25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
	I0223 05:09:29.995646   25649 pod_ready.go:102] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"False"
	I0223 05:09:31.989117   25649 pod_ready.go:92] pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace has status "Ready":"True"
	I0223 05:09:31.989145   25649 pod_ready.go:81] duration metric: took 10.011389818s waiting for pod "coredns-6d4b75cb6d-mmpvt" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:31.989183   25649 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:31.995226   25649 pod_ready.go:92] pod "etcd-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
	I0223 05:09:31.995240   25649 pod_ready.go:81] duration metric: took 6.049576ms waiting for pod "etcd-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:31.995248   25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.004888   25649 pod_ready.go:92] pod "kube-apiserver-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
	I0223 05:09:32.004906   25649 pod_ready.go:81] duration metric: took 9.652018ms waiting for pod "kube-apiserver-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.004916   25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.010469   25649 pod_ready.go:92] pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
	I0223 05:09:32.010491   25649 pod_ready.go:81] duration metric: took 5.567242ms waiting for pod "kube-controller-manager-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.010502   25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.014813   25649 pod_ready.go:92] pod "kube-proxy-bq8xz" in "kube-system" namespace has status "Ready":"True"
	I0223 05:09:32.014833   25649 pod_ready.go:81] duration metric: took 4.323391ms waiting for pod "kube-proxy-bq8xz" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.014843   25649 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.388101   25649 pod_ready.go:92] pod "kube-scheduler-test-preload-113143" in "kube-system" namespace has status "Ready":"True"
	I0223 05:09:32.388121   25649 pod_ready.go:81] duration metric: took 373.270122ms waiting for pod "kube-scheduler-test-preload-113143" in "kube-system" namespace to be "Ready" ...
	I0223 05:09:32.388131   25649 pod_ready.go:38] duration metric: took 10.417877146s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0223 05:09:32.388148   25649 api_server.go:51] waiting for apiserver process to appear ...
	I0223 05:09:32.388192   25649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 05:09:32.401795   25649 api_server.go:71] duration metric: took 19.66155846s to wait for apiserver process to appear ...
	I0223 05:09:32.401828   25649 api_server.go:87] waiting for apiserver healthz status ...
	I0223 05:09:32.401839   25649 api_server.go:252] Checking apiserver healthz at https://192.168.39.53:8443/healthz ...
	I0223 05:09:32.407789   25649 api_server.go:278] https://192.168.39.53:8443/healthz returned 200:
	ok
	I0223 05:09:32.408596   25649 api_server.go:140] control plane version: v1.24.4
	I0223 05:09:32.408612   25649 api_server.go:130] duration metric: took 6.777726ms to wait for apiserver health ...
	I0223 05:09:32.408621   25649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0223 05:09:32.591210   25649 system_pods.go:59] 7 kube-system pods found
	I0223 05:09:32.591235   25649 system_pods.go:61] "coredns-6d4b75cb6d-mmpvt" [3928e1dc-58bd-434f-bc29-8c20afb5e112] Running
	I0223 05:09:32.591240   25649 system_pods.go:61] "etcd-test-preload-113143" [65f0e6f1-4ff2-49bd-9f2f-58967808df14] Running
	I0223 05:09:32.591251   25649 system_pods.go:61] "kube-apiserver-test-preload-113143" [e28969a2-5979-483e-bd07-658187cffae5] Running
	I0223 05:09:32.591255   25649 system_pods.go:61] "kube-controller-manager-test-preload-113143" [055f8ab8-0181-4121-8993-88d236e645c4] Running
	I0223 05:09:32.591259   25649 system_pods.go:61] "kube-proxy-bq8xz" [b957cd83-fc56-48cc-a924-775e7a3ad79f] Running
	I0223 05:09:32.591263   25649 system_pods.go:61] "kube-scheduler-test-preload-113143" [901702d4-f84c-4418-a3df-ea323600a55d] Running
	I0223 05:09:32.591269   25649 system_pods.go:61] "storage-provisioner" [a4976d12-2647-4fa6-8366-5d94a2155a2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0223 05:09:32.591274   25649 system_pods.go:74] duration metric: took 182.648658ms to wait for pod list to return data ...
	I0223 05:09:32.591280   25649 default_sa.go:34] waiting for default service account to be created ...
	I0223 05:09:32.787703   25649 default_sa.go:45] found service account: "default"
	I0223 05:09:32.787725   25649 default_sa.go:55] duration metric: took 196.440351ms for default service account to be created ...
	I0223 05:09:32.787732   25649 system_pods.go:116] waiting for k8s-apps to be running ...
	I0223 05:09:32.990191   25649 system_pods.go:86] 7 kube-system pods found
	I0223 05:09:32.990226   25649 system_pods.go:89] "coredns-6d4b75cb6d-mmpvt" [3928e1dc-58bd-434f-bc29-8c20afb5e112] Running
	I0223 05:09:32.990234   25649 system_pods.go:89] "etcd-test-preload-113143" [65f0e6f1-4ff2-49bd-9f2f-58967808df14] Running
	I0223 05:09:32.990240   25649 system_pods.go:89] "kube-apiserver-test-preload-113143" [e28969a2-5979-483e-bd07-658187cffae5] Running
	I0223 05:09:32.990247   25649 system_pods.go:89] "kube-controller-manager-test-preload-113143" [055f8ab8-0181-4121-8993-88d236e645c4] Running
	I0223 05:09:32.990253   25649 system_pods.go:89] "kube-proxy-bq8xz" [b957cd83-fc56-48cc-a924-775e7a3ad79f] Running
	I0223 05:09:32.990259   25649 system_pods.go:89] "kube-scheduler-test-preload-113143" [901702d4-f84c-4418-a3df-ea323600a55d] Running
	I0223 05:09:32.990277   25649 system_pods.go:89] "storage-provisioner" [a4976d12-2647-4fa6-8366-5d94a2155a2f] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0223 05:09:32.990285   25649 system_pods.go:126] duration metric: took 202.549103ms to wait for k8s-apps to be running ...
	I0223 05:09:32.990294   25649 system_svc.go:44] waiting for kubelet service to be running ....
	I0223 05:09:32.990341   25649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 05:09:33.004902   25649 system_svc.go:56] duration metric: took 14.585277ms WaitForService to wait for kubelet.
	I0223 05:09:33.004933   25649 kubeadm.go:578] duration metric: took 20.264701087s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0223 05:09:33.004986   25649 node_conditions.go:102] verifying NodePressure condition ...
	I0223 05:09:33.187466   25649 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0223 05:09:33.187493   25649 node_conditions.go:123] node cpu capacity is 2
	I0223 05:09:33.187503   25649 node_conditions.go:105] duration metric: took 182.51139ms to run NodePressure ...
	I0223 05:09:33.187514   25649 start.go:228] waiting for startup goroutines ...
	I0223 05:09:33.187520   25649 start.go:233] waiting for cluster config update ...
	I0223 05:09:33.187529   25649 start.go:242] writing updated cluster config ...
	I0223 05:09:33.187784   25649 ssh_runner.go:195] Run: rm -f paused
	I0223 05:09:33.236840   25649 start.go:555] kubectl: 1.26.1, cluster: 1.24.4 (minor skew: 2)
	I0223 05:09:33.239413   25649 out.go:177] 
	W0223 05:09:33.241191   25649 out.go:239] ! /usr/local/bin/kubectl is version 1.26.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0223 05:09:33.242920   25649 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0223 05:09:33.244772   25649 out.go:177] * Done! kubectl is now configured to use "test-preload-113143" cluster and "default" namespace by default
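The warning above fires because the kubectl minor version differs from the cluster's by more than one. A minimal sketch of that skew check, assuming plain "major.minor.patch" version strings (not minikube's actual implementation):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorOf parses the minor component of a "major.minor.patch" version.
	func minorOf(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}

	func main() {
		client, cluster := "1.26.1", "1.24.4" // values from the log
		if skew := minorOf(client) - minorOf(cluster); skew > 1 || skew < -1 {
			fmt.Printf("! kubectl is version %s, which may have incompatibilities with Kubernetes %s.\n", client, cluster)
		}
	}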
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	e9086c130faaa       a4ca41631cc7a       3 seconds ago        Running             coredns                   1                   272567bfc1eee
	c07d53a7b0d73       7a53d1e08ef58       9 seconds ago        Running             kube-proxy                1                   287a18f017a43
	2863db25cf066       1f99cb6da9a82       20 seconds ago       Running             kube-controller-manager   2                   627b3ddfcac38
	4be127efeda99       6cab9d1bed1be       28 seconds ago       Running             kube-apiserver            2                   2f2af25be8b93
	e67313b9c90e5       1f99cb6da9a82       42 seconds ago       Exited              kube-controller-manager   1                   627b3ddfcac38
	82f70c263d12e       aebe758cef4cd       52 seconds ago       Running             etcd                      1                   b6220acccd7ca
	263c6e12a3a71       03fa22539fc1c       53 seconds ago       Running             kube-scheduler            1                   73c4a3a4e580f
	26d6d8b7f66e2       6cab9d1bed1be       About a minute ago   Exited              kube-apiserver            1                   2f2af25be8b93
	
	* 
	* ==> containerd <==
	* -- Journal begins at Thu 2023-02-23 05:07:52 UTC, ends at Thu 2023-02-23 05:09:34 UTC. --
	Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755078933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755255754Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755267326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 05:09:23 test-preload-113143 containerd[628]: time="2023-02-23T05:09:23.755558286Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e pid=1441 runtime=io.containerd.runc.v2
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.091404305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:a4976d12-2647-4fa6-8366-5d94a2155a2f,Namespace:kube-system,Attempt:0,} returns sandbox id \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\""
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.100851276Z" level=info msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.112050594Z" level=error msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists"
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.723310673Z" level=info msg="CreateContainer within sandbox \"287a18f017a433c1fe40b39903b974b13f44b42c45101dc30e45325666af8e0b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.757095296Z" level=info msg="CreateContainer within sandbox \"287a18f017a433c1fe40b39903b974b13f44b42c45101dc30e45325666af8e0b\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b\""
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.761461531Z" level=info msg="StartContainer for \"c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b\""
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.854428630Z" level=info msg="StartContainer for \"c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b\" returns successfully"
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.911519099Z" level=info msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Feb 23 05:09:24 test-preload-113143 containerd[628]: time="2023-02-23T05:09:24.938680061Z" level=error msg="CreateContainer within sandbox \"76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists"
	Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.723579450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-mmpvt,Uid:3928e1dc-58bd-434f-bc29-8c20afb5e112,Namespace:kube-system,Attempt:0,}"
	Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.826844335Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.826899452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.826908802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Feb 23 05:09:29 test-preload-113143 containerd[628]: time="2023-02-23T05:09:29.827317117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63 pid=1647 runtime=io.containerd.runc.v2
	Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.160067894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-mmpvt,Uid:3928e1dc-58bd-434f-bc29-8c20afb5e112,Namespace:kube-system,Attempt:0,} returns sandbox id \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\""
	Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.167193734Z" level=info msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.198776943Z" level=error msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for &ContainerMetadata{Name:coredns,Attempt:1,} failed" error="failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists"
	Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.923195445Z" level=info msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.970962111Z" level=info msg="CreateContainer within sandbox \"272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a\""
	Feb 23 05:09:30 test-preload-113143 containerd[628]: time="2023-02-23T05:09:30.972234600Z" level=info msg="StartContainer for \"e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a\""
	Feb 23 05:09:31 test-preload-113143 containerd[628]: time="2023-02-23T05:09:31.061541291Z" level=info msg="StartContainer for \"e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a\" returns successfully"
	
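The repeated CreateContainer failures above ("failed to rename ... file exists") suggest stale overlayfs snapshot directories left on disk by the forced stop: the snapshotter renames its temp dir to the next numeric ID (32, then 33, then 34), and each target already exists until a free ID is reached, which is why every container eventually starts on a retry. A hedged diagnostic sketch that walks the k8s.io snapshot namespace with the containerd Go client (the same view `ctr -n k8s.io snapshots ls` gives):

	package main

	import (
		"context"
		"fmt"
		"log"

		containerd "github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
		"github.com/containerd/containerd/snapshots"
	)

	func main() {
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()

		// CRI pods/containers live in the "k8s.io" namespace.
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
		sn := client.SnapshotService("overlayfs")
		defer sn.Close()

		// List every snapshot containerd's metadata knows about; directories on
		// disk under .../io.containerd.snapshotter.v1.overlayfs/snapshots that
		// have no matching entry here are the stale leftovers.
		if err := sn.Walk(ctx, func(ctx context.Context, info snapshots.Info) error {
			fmt.Printf("%s\tparent=%s\tkind=%v\n", info.Name, info.Parent, info.Kind)
			return nil
		}); err != nil {
			log.Fatal(err)
		}
	}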
	* 
	* ==> coredns [e9086c130faaa5794e1ea4eb2ac50d3af5376fea2c565b0e99be7b6e81bb608a] <==
	* .:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:36477 - 25196 "HINFO IN 1800509997243044346.4330803314511940720. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.006843638s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-113143
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-113143
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=66d56dc3ac28a702789778ac47e90f12526a0321
	                    minikube.k8s.io/name=test-preload-113143
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_02_23T05_04_47_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 23 Feb 2023 05:04:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-113143
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 23 Feb 2023 05:09:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 23 Feb 2023 05:09:21 +0000   Thu, 23 Feb 2023 05:04:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 23 Feb 2023 05:09:21 +0000   Thu, 23 Feb 2023 05:04:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 23 Feb 2023 05:09:21 +0000   Thu, 23 Feb 2023 05:04:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 23 Feb 2023 05:09:21 +0000   Thu, 23 Feb 2023 05:09:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.53
	  Hostname:    test-preload-113143
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 5394e64b7b5e4495aba69ae1cd40df43
	  System UUID:                5394e64b-7b5e-4495-aba6-9ae1cd40df43
	  Boot ID:                    84658728-e2dc-4b20-b2b5-55b270763021
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.15
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-mmpvt                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m34s
	  kube-system                 etcd-test-preload-113143                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m49s
	  kube-system                 kube-apiserver-test-preload-113143             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-controller-manager-test-preload-113143    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-bq8xz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 kube-scheduler-test-preload-113143             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9s                     kube-proxy       
	  Normal  Starting                 4m31s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m56s (x5 over 4m56s)  kubelet          Node test-preload-113143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s (x5 over 4m56s)  kubelet          Node test-preload-113143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s (x4 over 4m56s)  kubelet          Node test-preload-113143 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s                  kubelet          Node test-preload-113143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                  kubelet          Node test-preload-113143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                  kubelet          Node test-preload-113143 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m36s                  kubelet          Node test-preload-113143 status is now: NodeReady
	  Normal  RegisteredNode           4m35s                  node-controller  Node test-preload-113143 event: Registered Node test-preload-113143 in Controller
	  Normal  Starting                 69s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  69s (x8 over 69s)      kubelet          Node test-preload-113143 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    69s (x8 over 69s)      kubelet          Node test-preload-113143 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     69s (x7 over 69s)      kubelet          Node test-preload-113143 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  69s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9s                     node-controller  Node test-preload-113143 event: Registered Node test-preload-113143 in Controller
	
	* 
	* ==> dmesg <==
	* [Feb23 05:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.072142] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.962902] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.184322] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150664] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.469081] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Feb23 05:08] systemd-fstab-generator[527]: Ignoring "noauto" for root device
	[  +2.811503] systemd-fstab-generator[556]: Ignoring "noauto" for root device
	[  +0.102720] systemd-fstab-generator[567]: Ignoring "noauto" for root device
	[  +0.129947] systemd-fstab-generator[580]: Ignoring "noauto" for root device
	[  +0.095532] systemd-fstab-generator[591]: Ignoring "noauto" for root device
	[  +0.232107] systemd-fstab-generator[618]: Ignoring "noauto" for root device
	[ +13.433338] systemd-fstab-generator[814]: Ignoring "noauto" for root device
	[Feb23 05:09] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.997888] kauditd_printk_skb: 15 callbacks suppressed
	
	* 
	* ==> etcd [82f70c263d12e8535727b6f70dc9c136781cb5d47e181a272699b4d324d859b8] <==
	* {"level":"info","ts":"2023-02-23T05:08:42.289Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8389b8f6c4f004d4","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-02-23T05:08:42.289Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-02-23T05:08:42.291Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 switched to configuration voters=(9478310260783449300)"}
	{"level":"info","ts":"2023-02-23T05:08:42.291Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"1138cde6dcc1ce27","local-member-id":"8389b8f6c4f004d4","added-peer-id":"8389b8f6c4f004d4","added-peer-peer-urls":["https://192.168.39.53:2380"]}
	{"level":"info","ts":"2023-02-23T05:08:42.291Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"1138cde6dcc1ce27","local-member-id":"8389b8f6c4f004d4","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T05:08:42.291Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-02-23T05:08:42.293Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-02-23T05:08:42.294Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8389b8f6c4f004d4","initial-advertise-peer-urls":["https://192.168.39.53:2380"],"listen-peer-urls":["https://192.168.39.53:2380"],"advertise-client-urls":["https://192.168.39.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-02-23T05:08:42.294Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-02-23T05:08:42.295Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.53:2380"}
	{"level":"info","ts":"2023-02-23T05:08:42.295Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.53:2380"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 is starting a new election at term 2"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 received MsgPreVoteResp from 8389b8f6c4f004d4 at term 2"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became candidate at term 3"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 received MsgVoteResp from 8389b8f6c4f004d4 at term 3"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8389b8f6c4f004d4 became leader at term 3"}
	{"level":"info","ts":"2023-02-23T05:08:43.269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8389b8f6c4f004d4 elected leader 8389b8f6c4f004d4 at term 3"}
	{"level":"info","ts":"2023-02-23T05:08:43.270Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8389b8f6c4f004d4","local-member-attributes":"{Name:test-preload-113143 ClientURLs:[https://192.168.39.53:2379]}","request-path":"/0/members/8389b8f6c4f004d4/attributes","cluster-id":"1138cde6dcc1ce27","publish-timeout":"7s"}
	{"level":"info","ts":"2023-02-23T05:08:43.270Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T05:08:43.271Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.53:2379"}
	{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-02-23T05:08:43.273Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  05:09:34 up 1 min,  0 users,  load average: 0.84, 0.25, 0.09
	Linux test-preload-113143 5.10.57 #1 SMP Thu Feb 16 22:09:52 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [26d6d8b7f66e2c3a37e7b3201a453b0ba3e5427b0490eae75d7645d3d5c0173a] <==
	* I0223 05:08:27.016247       1 server.go:558] external host was not specified, using 192.168.39.53
	I0223 05:08:27.017132       1 server.go:158] Version: v1.24.4
	I0223 05:08:27.017181       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 05:08:27.235513       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0223 05:08:27.236422       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0223 05:08:27.236435       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0223 05:08:27.237415       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0223 05:08:27.237426       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0223 05:08:27.240466       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:28.235811       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:28.241663       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:29.236633       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:29.601245       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:30.673856       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:31.737307       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:32.945880       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:36.146408       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:37.760454       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0223 05:08:42.229113       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0223 05:08:47.240471       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [4be127efeda9951812d50daf33349c99ea494ca4adc427fd492a0bca8b26b5c2] <==
	* I0223 05:09:09.021822       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0223 05:09:09.021841       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0223 05:09:09.021853       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0223 05:09:09.021872       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0223 05:09:09.021875       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I0223 05:09:09.022388       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0223 05:09:09.023024       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0223 05:09:09.114707       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0223 05:09:09.117626       1 cache.go:39] Caches are synced for autoregister controller
	I0223 05:09:09.117670       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0223 05:09:09.117687       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0223 05:09:09.118172       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0223 05:09:09.123397       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0223 05:09:09.145333       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0223 05:09:09.717045       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0223 05:09:10.025826       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0223 05:09:10.844552       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0223 05:09:10.861472       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0223 05:09:10.906497       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0223 05:09:10.925379       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0223 05:09:10.935430       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0223 05:09:11.862953       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0223 05:09:25.072880       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0223 05:09:25.584900       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0223 05:09:25.864663       1 controller.go:611] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [2863db25cf0664dcd0d086d52797107b2f30c8801252b161e47992f395dd65b7] <==
	* W0223 05:09:25.625582       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-113143. Assuming now as a timestamp.
	I0223 05:09:25.625617       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0223 05:09:25.626099       1 event.go:294] "Event occurred" object="test-preload-113143" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-113143 event: Registered Node test-preload-113143 in Controller"
	I0223 05:09:25.628887       1 shared_informer.go:262] Caches are synced for GC
	I0223 05:09:25.631089       1 shared_informer.go:262] Caches are synced for stateful set
	I0223 05:09:25.637084       1 shared_informer.go:262] Caches are synced for expand
	I0223 05:09:25.639520       1 shared_informer.go:262] Caches are synced for crt configmap
	I0223 05:09:25.644092       1 shared_informer.go:262] Caches are synced for TTL
	I0223 05:09:25.650235       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-serving
	I0223 05:09:25.652489       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0223 05:09:25.652670       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I0223 05:09:25.653931       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-legacy-unknown
	I0223 05:09:25.683957       1 shared_informer.go:262] Caches are synced for cronjob
	I0223 05:09:25.703441       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0223 05:09:25.721506       1 shared_informer.go:262] Caches are synced for disruption
	I0223 05:09:25.721520       1 disruption.go:371] Sending events to api server.
	I0223 05:09:25.727281       1 shared_informer.go:262] Caches are synced for deployment
	I0223 05:09:25.835255       1 shared_informer.go:262] Caches are synced for attach detach
	I0223 05:09:25.835724       1 shared_informer.go:262] Caches are synced for resource quota
	I0223 05:09:25.846506       1 shared_informer.go:262] Caches are synced for endpoint
	I0223 05:09:25.849829       1 shared_informer.go:262] Caches are synced for resource quota
	I0223 05:09:25.883403       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0223 05:09:26.263251       1 shared_informer.go:262] Caches are synced for garbage collector
	I0223 05:09:26.263299       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0223 05:09:26.273864       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-controller-manager [e67313b9c90e5d35bc1c2a085135b0289a2017c7223f431be4468d304173ee69] <==
	* 	vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x3931a60?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4d010e0, 0xc000e4d530}, 0x1, 0xc000446900)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0xdf8475800, 0x0, 0xa0?, 0xc00006efd0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x4d2abb0?, 0xc0005a0a40?, 0xc0007ebda0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
	
	goroutine 141 [syscall]:
	syscall.Syscall6(0xe8, 0xd, 0xc000f2fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
		/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
	k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x0?, {0xc000f2fc14?, 0x0?, 0x0?}, 0x0?)
		vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000dbb420)
		vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0005b8a00)
		vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
	created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
		vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
	
	* 
	* ==> kube-proxy [c07d53a7b0d7358ada22b20b8e047addfdd2a5d8ca0fe53c9350ce007c00fb6b] <==
	* I0223 05:09:25.012755       1 node.go:163] Successfully retrieved node IP: 192.168.39.53
	I0223 05:09:25.012832       1 server_others.go:138] "Detected node IP" address="192.168.39.53"
	I0223 05:09:25.012918       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0223 05:09:25.060338       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0223 05:09:25.060380       1 server_others.go:206] "Using iptables Proxier"
	I0223 05:09:25.061135       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0223 05:09:25.063287       1 server.go:661] "Version info" version="v1.24.4"
	I0223 05:09:25.063325       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0223 05:09:25.064651       1 config.go:317] "Starting service config controller"
	I0223 05:09:25.065679       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0223 05:09:25.065732       1 config.go:226] "Starting endpoint slice config controller"
	I0223 05:09:25.065738       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0223 05:09:25.069621       1 config.go:444] "Starting node config controller"
	I0223 05:09:25.069721       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0223 05:09:25.166063       1 shared_informer.go:262] Caches are synced for service config
	I0223 05:09:25.166081       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0223 05:09:25.170668       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [263c6e12a3a71fb6508ca375613a7bcb8e65a1da59d0a00449930d6b59deab8d] <==
	* W0223 05:09:04.407455       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	E0223 05:09:04.407500       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	W0223 05:09:04.541907       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	E0223 05:09:04.541945       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.53:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	W0223 05:09:04.554894       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.53:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	E0223 05:09:04.554933       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.53:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	W0223 05:09:05.650504       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.53:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	E0223 05:09:05.650536       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.53:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	W0223 05:09:05.872695       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: Get "https://192.168.39.53:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	E0223 05:09:05.872922       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.53:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	W0223 05:09:05.916585       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.53:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	E0223 05:09:05.916862       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.53:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.53:8443: connect: connection refused
	W0223 05:09:09.050756       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0223 05:09:09.050813       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0223 05:09:09.051222       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0223 05:09:09.051260       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0223 05:09:09.052159       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0223 05:09:09.052203       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0223 05:09:09.052465       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0223 05:09:09.052502       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0223 05:09:09.052720       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0223 05:09:09.052756       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0223 05:09:09.055114       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0223 05:09:09.055155       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0223 05:09:28.061866       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Thu 2023-02-23 05:07:52 UTC, ends at Thu 2023-02-23 05:09:34 UTC. --
	Feb 23 05:09:10 test-preload-113143 kubelet[820]: E0223 05:09:10.850838     820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-bq8xz" podUID=b957cd83-fc56-48cc-a924-775e7a3ad79f
	Feb 23 05:09:10 test-preload-113143 kubelet[820]: I0223 05:09:10.856159     820 kubelet_node_status.go:70] "Attempting to register node" node="test-preload-113143"
	Feb 23 05:09:11 test-preload-113143 kubelet[820]: E0223 05:09:11.342591     820 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 23 05:09:11 test-preload-113143 kubelet[820]: E0223 05:09:11.343233     820 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume podName:3928e1dc-58bd-434f-bc29-8c20afb5e112 nodeName:}" failed. No retries permitted until 2023-02-23 05:09:13.343152384 +0000 UTC m=+47.874964728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume") pod "coredns-6d4b75cb6d-mmpvt" (UID: "3928e1dc-58bd-434f-bc29-8c20afb5e112") : object "kube-system"/"coredns" not registered
	Feb 23 05:09:11 test-preload-113143 kubelet[820]: I0223 05:09:11.627920     820 kubelet_node_status.go:108] "Node was previously registered" node="test-preload-113143"
	Feb 23 05:09:11 test-preload-113143 kubelet[820]: I0223 05:09:11.628110     820 kubelet_node_status.go:73] "Successfully registered node" node="test-preload-113143"
	Feb 23 05:09:11 test-preload-113143 kubelet[820]: I0223 05:09:11.630387     820 setters.go:532] "Node became not ready" node="test-preload-113143" condition={Type:Ready Status:False LastHeartbeatTime:2023-02-23 05:09:11.630324888 +0000 UTC m=+46.162137226 LastTransitionTime:2023-02-23 05:09:11.630324888 +0000 UTC m=+46.162137226 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized}
	Feb 23 05:09:11 test-preload-113143 kubelet[820]: E0223 05:09:11.722375     820 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
	Feb 23 05:09:13 test-preload-113143 kubelet[820]: E0223 05:09:13.358379     820 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Feb 23 05:09:13 test-preload-113143 kubelet[820]: E0223 05:09:13.358582     820 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume podName:3928e1dc-58bd-434f-bc29-8c20afb5e112 nodeName:}" failed. No retries permitted until 2023-02-23 05:09:17.358503581 +0000 UTC m=+51.890315907 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3928e1dc-58bd-434f-bc29-8c20afb5e112-config-volume") pod "coredns-6d4b75cb6d-mmpvt" (UID: "3928e1dc-58bd-434f-bc29-8c20afb5e112") : object "kube-system"/"coredns" not registered
	Feb 23 05:09:13 test-preload-113143 kubelet[820]: E0223 05:09:13.724198     820 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
	Feb 23 05:09:13 test-preload-113143 kubelet[820]: I0223 05:09:13.799966     820 scope.go:110] "RemoveContainer" containerID="e67313b9c90e5d35bc1c2a085135b0289a2017c7223f431be4468d304173ee69"
	Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.549730     820 remote_runtime.go:201] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists"
	Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.550179     820 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" pod="kube-system/coredns-6d4b75cb6d-mmpvt"
	Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.550242     820 kuberuntime_manager.go:815] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists" pod="kube-system/coredns-6d4b75cb6d-mmpvt"
	Feb 23 05:09:17 test-preload-113143 kubelet[820]: E0223 05:09:17.550459     820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6d4b75cb6d-mmpvt_kube-system(3928e1dc-58bd-434f-bc29-8c20afb5e112)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6d4b75cb6d-mmpvt_kube-system(3928e1dc-58bd-434f-bc29-8c20afb5e112)\\\": rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1713846745 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/31: file exists\"" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
	Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.112458     820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists" podSandboxID="76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e"
	Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.112559     820 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qscbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(a4976d12-2647-4fa6-8366-5d94a2155a2f): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists
	Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.112591     820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1485309488 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/32: file exists\"" pod="kube-system/storage-provisioner" podUID=a4976d12-2647-4fa6-8366-5d94a2155a2f
	Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.939311     820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists" podSandboxID="76bf5927ccca7f35266d5463c2490334fdca58e58bda1da6de94f40fb597694e"
	Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.939481     820 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qscbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(a4976d12-2647-4fa6-8366-5d94a2155a2f): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists
	Feb 23 05:09:24 test-preload-113143 kubelet[820]: E0223 05:09:24.939813     820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-3011273267 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/33: file exists\"" pod="kube-system/storage-provisioner" podUID=a4976d12-2647-4fa6-8366-5d94a2155a2f
	Feb 23 05:09:30 test-preload-113143 kubelet[820]: E0223 05:09:30.199204     820 remote_runtime.go:421] "CreateContainer in sandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists" podSandboxID="272567bfc1eee45f4f6d062106f0558f6451c59e7d118cb763acb13f76192c63"
	Feb 23 05:09:30 test-preload-113143 kubelet[820]: E0223 05:09:30.199381     820 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-d8mjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-mmpvt_kube-system(3928e1dc-58bd-434f-bc29-8c20afb5e112): CreateContainerError: failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists
	Feb 23 05:09:30 test-preload-113143 kubelet[820]: E0223 05:09:30.199421     820 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerError: \"failed to create containerd container: failed to rename: rename /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/new-1946538601 /mnt/vda1/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/34: file exists\"" pod="kube-system/coredns-6d4b75cb6d-mmpvt" podUID=3928e1dc-58bd-434f-bc29-8c20afb5e112
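	
	Every kubelet error in this journal is the same overlayfs rename collision; only the staging directory and the target snapshot number (31 through 34) change between retries. One blunt recovery sketch, under the assumption that it is acceptable to discard the cached images so containerd can rebuild its snapshot state from scratch (images will be re-pulled afterwards):
	
	  out/minikube-linux-amd64 ssh -p test-preload-113143 -- \
	    'sudo systemctl stop kubelet containerd \
	     && sudo rm -rf /mnt/vda1/var/lib/containerd \
	     && sudo systemctl start containerd kubelet'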
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-113143 -n test-preload-113143
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-113143 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-113143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-113143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-113143: (1.183086398s)
--- FAIL: TestPreload (357.55s)

                                                
                                    

Test pass (257/292)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 33.93
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.06
10 TestDownloadOnly/v1.26.1/json-events 21.48
11 TestDownloadOnly/v1.26.1/preload-exists 0
15 TestDownloadOnly/v1.26.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.38
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.36
19 TestBinaryMirror 0.63
20 TestOffline 118.45
22 TestAddons/Setup 148.67
24 TestAddons/parallel/Registry 21.38
25 TestAddons/parallel/Ingress 23.75
26 TestAddons/parallel/MetricsServer 6.31
27 TestAddons/parallel/HelmTiller 14.21
29 TestAddons/parallel/CSI 63.23
30 TestAddons/parallel/Headlamp 16.95
31 TestAddons/parallel/CloudSpanner 5.55
34 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/StoppedEnableDisable 91.97
36 TestCertOptions 95.13
37 TestCertExpiration 244.9
39 TestForceSystemdFlag 83.9
40 TestForceSystemdEnv 82.36
41 TestKVMDriverInstallOrUpdate 15.7
45 TestErrorSpam/setup 53.54
46 TestErrorSpam/start 0.36
47 TestErrorSpam/status 0.74
48 TestErrorSpam/pause 1.42
49 TestErrorSpam/unpause 1.5
50 TestErrorSpam/stop 2.56
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 69.38
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 6.88
57 TestFunctional/serial/KubeContext 0.04
58 TestFunctional/serial/KubectlGetPods 0.1
61 TestFunctional/serial/CacheCmd/cache/add_remote 14.43
62 TestFunctional/serial/CacheCmd/cache/add_local 2.88
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
66 TestFunctional/serial/CacheCmd/cache/cache_reload 3.81
67 TestFunctional/serial/CacheCmd/cache/delete 0.1
68 TestFunctional/serial/MinikubeKubectlCmd 0.11
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
70 TestFunctional/serial/ExtraConfig 43.36
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.39
73 TestFunctional/serial/LogsFileCmd 1.4
75 TestFunctional/parallel/ConfigCmd 0.37
76 TestFunctional/parallel/DashboardCmd 15.12
77 TestFunctional/parallel/DryRun 0.28
78 TestFunctional/parallel/InternationalLanguage 0.17
79 TestFunctional/parallel/StatusCmd 0.87
83 TestFunctional/parallel/ServiceCmdConnect 12.51
84 TestFunctional/parallel/AddonsCmd 0.19
85 TestFunctional/parallel/PersistentVolumeClaim 46.87
87 TestFunctional/parallel/SSHCmd 0.5
88 TestFunctional/parallel/CpCmd 0.99
89 TestFunctional/parallel/MySQL 36.31
90 TestFunctional/parallel/FileSync 0.23
91 TestFunctional/parallel/CertSync 1.33
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
99 TestFunctional/parallel/License 0.32
108 TestFunctional/parallel/ProfileCmd/profile_not_create 0.37
109 TestFunctional/parallel/ProfileCmd/profile_list 0.31
110 TestFunctional/parallel/ProfileCmd/profile_json_output 0.32
111 TestFunctional/parallel/MountCmd/any-port 10.74
112 TestFunctional/parallel/ServiceCmd/ServiceJSONOutput 0.31
113 TestFunctional/parallel/MountCmd/specific-port 2.2
114 TestFunctional/parallel/Version/short 0.05
115 TestFunctional/parallel/Version/components 0.59
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.41
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.45
120 TestFunctional/parallel/ImageCommands/ImageBuild 5.54
121 TestFunctional/parallel/ImageCommands/Setup 1.78
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.09
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.78
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.85
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 6.49
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.38
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.9
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.5
132 TestFunctional/delete_addon-resizer_images 0.16
133 TestFunctional/delete_my-image_image 0.07
134 TestFunctional/delete_minikube_cached_images 0.06
138 TestIngressAddonLegacy/StartLegacyK8sCluster 101.01
140 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 18.31
141 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.39
142 TestIngressAddonLegacy/serial/ValidateIngressAddons 36.88
145 TestJSONOutput/start/Command 78.41
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.63
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.61
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 7.09
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.45
173 TestMainNoArgs 0.05
174 TestMinikubeProfile 111.92
177 TestMountStart/serial/StartWithMountFirst 29.55
178 TestMountStart/serial/VerifyMountFirst 0.39
179 TestMountStart/serial/StartWithMountSecond 28.42
180 TestMountStart/serial/VerifyMountSecond 0.4
181 TestMountStart/serial/DeleteFirst 0.92
182 TestMountStart/serial/VerifyMountPostDelete 0.4
183 TestMountStart/serial/Stop 1.16
184 TestMountStart/serial/RestartStopped 23.79
185 TestMountStart/serial/VerifyMountPostStop 0.4
188 TestMultiNode/serial/FreshStart2Nodes 167.06
189 TestMultiNode/serial/DeployApp2Nodes 6.81
190 TestMultiNode/serial/PingHostFrom2Pods 0.87
191 TestMultiNode/serial/AddNode 68.08
192 TestMultiNode/serial/ProfileList 0.26
193 TestMultiNode/serial/CopyFile 7.46
194 TestMultiNode/serial/StopNode 2.16
195 TestMultiNode/serial/StartAfterStop 64.2
196 TestMultiNode/serial/RestartKeepsNodes 484.76
197 TestMultiNode/serial/DeleteNode 2
198 TestMultiNode/serial/StopMultiNode 183.8
199 TestMultiNode/serial/RestartMultiNode 299.11
200 TestMultiNode/serial/ValidateNameConflict 60.52
207 TestScheduledStopUnix 125.34
211 TestRunningBinaryUpgrade 272.41
213 TestKubernetesUpgrade 197.84
216 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
217 TestNoKubernetes/serial/StartWithK8s 106.69
218 TestNoKubernetes/serial/StartWithStopK8s 41.22
219 TestStoppedBinaryUpgrade/Setup 2.9
220 TestStoppedBinaryUpgrade/Upgrade 217.59
221 TestNoKubernetes/serial/Start 31.01
222 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
223 TestNoKubernetes/serial/ProfileList 0.88
224 TestNoKubernetes/serial/Stop 1.67
225 TestNoKubernetes/serial/StartNoArgs 66.74
226 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
234 TestStoppedBinaryUpgrade/MinikubeLogs 0.92
242 TestNetworkPlugins/group/false 3.87
247 TestPause/serial/Start 120.65
249 TestStartStop/group/old-k8s-version/serial/FirstStart 164.69
251 TestStartStop/group/no-preload/serial/FirstStart 160.13
252 TestPause/serial/SecondStartNoReconfiguration 5.72
253 TestPause/serial/Pause 0.66
254 TestPause/serial/VerifyStatus 0.24
255 TestPause/serial/Unpause 0.58
256 TestPause/serial/PauseAgain 0.76
257 TestPause/serial/DeletePaused 1.11
258 TestPause/serial/VerifyDeletedResources 6.05
260 TestStartStop/group/embed-certs/serial/FirstStart 71.1
262 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.59
263 TestStartStop/group/no-preload/serial/DeployApp 10.51
264 TestStartStop/group/old-k8s-version/serial/DeployApp 9.59
265 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.12
266 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.26
267 TestStartStop/group/old-k8s-version/serial/Stop 92.27
268 TestStartStop/group/no-preload/serial/Stop 92.64
269 TestStartStop/group/embed-certs/serial/DeployApp 9.43
270 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.84
271 TestStartStop/group/embed-certs/serial/Stop 91.8
272 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.38
273 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.78
274 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.86
275 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
276 TestStartStop/group/old-k8s-version/serial/SecondStart 372.42
277 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
278 TestStartStop/group/no-preload/serial/SecondStart 323.88
279 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
280 TestStartStop/group/embed-certs/serial/SecondStart 664.22
281 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
282 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 384.82
283 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
284 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
285 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
286 TestStartStop/group/no-preload/serial/Pause 2.64
288 TestStartStop/group/newest-cni/serial/FirstStart 65.2
289 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
290 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
291 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
292 TestStartStop/group/old-k8s-version/serial/Pause 2.64
293 TestNetworkPlugins/group/auto/Start 113.26
294 TestStartStop/group/newest-cni/serial/DeployApp 0
295 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
296 TestStartStop/group/newest-cni/serial/Stop 15.29
297 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
298 TestStartStop/group/newest-cni/serial/SecondStart 90.87
299 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.26
300 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
301 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
302 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.5
303 TestNetworkPlugins/group/kindnet/Start 75.57
304 TestNetworkPlugins/group/auto/KubeletFlags 0.31
305 TestNetworkPlugins/group/auto/NetCatPod 12.29
306 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
307 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
308 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
309 TestStartStop/group/newest-cni/serial/Pause 2.51
310 TestNetworkPlugins/group/auto/DNS 0.19
311 TestNetworkPlugins/group/auto/Localhost 0.18
312 TestNetworkPlugins/group/auto/HairPin 0.16
313 TestNetworkPlugins/group/calico/Start 99.54
314 TestNetworkPlugins/group/custom-flannel/Start 106.37
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
316 TestNetworkPlugins/group/kindnet/KubeletFlags 0.25
317 TestNetworkPlugins/group/kindnet/NetCatPod 13.5
318 TestNetworkPlugins/group/kindnet/DNS 0.19
319 TestNetworkPlugins/group/kindnet/Localhost 0.17
320 TestNetworkPlugins/group/kindnet/HairPin 0.16
321 TestNetworkPlugins/group/enable-default-cni/Start 121.44
322 TestNetworkPlugins/group/calico/ControllerPod 5.03
323 TestNetworkPlugins/group/calico/KubeletFlags 0.24
324 TestNetworkPlugins/group/calico/NetCatPod 11.39
325 TestNetworkPlugins/group/calico/DNS 0.18
326 TestNetworkPlugins/group/calico/Localhost 0.15
327 TestNetworkPlugins/group/calico/HairPin 0.13
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.39
330 TestNetworkPlugins/group/custom-flannel/DNS 0.19
331 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
332 TestNetworkPlugins/group/custom-flannel/HairPin 0.17
333 TestNetworkPlugins/group/flannel/Start 92.69
334 TestNetworkPlugins/group/bridge/Start 87.61
335 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
336 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
337 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
338 TestStartStop/group/embed-certs/serial/Pause 2.91
339 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
340 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.41
341 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
342 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
343 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
344 TestNetworkPlugins/group/flannel/ControllerPod 5.02
345 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
346 TestNetworkPlugins/group/flannel/NetCatPod 9.41
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
348 TestNetworkPlugins/group/bridge/NetCatPod 11.37
349 TestNetworkPlugins/group/flannel/DNS 0.16
350 TestNetworkPlugins/group/flannel/Localhost 0.12
351 TestNetworkPlugins/group/flannel/HairPin 0.13
352 TestNetworkPlugins/group/bridge/DNS 0.16
353 TestNetworkPlugins/group/bridge/Localhost 0.17
354 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.16.0/json-events (33.93s)
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-060444 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-060444 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (33.927525464s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (33.93s)

TestDownloadOnly/v1.16.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-060444
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-060444: exit status 85 (63.769989ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-060444 | jenkins | v1.29.0 | 23 Feb 23 04:22 UTC |          |
	|         | -p download-only-060444        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 04:22:58
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 04:22:58.431025   10909 out.go:296] Setting OutFile to fd 1 ...
	I0223 04:22:58.431250   10909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:22:58.431258   10909 out.go:309] Setting ErrFile to fd 2...
	I0223 04:22:58.431263   10909 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:22:58.431363   10909 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	W0223 04:22:58.431469   10909 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-3857/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-3857/.minikube/config/config.json: no such file or directory
	I0223 04:22:58.432012   10909 out.go:303] Setting JSON to true
	I0223 04:22:58.432791   10909 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":323,"bootTime":1677125856,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 04:22:58.432852   10909 start.go:135] virtualization: kvm guest
	I0223 04:22:58.435992   10909 out.go:97] [download-only-060444] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	W0223 04:22:58.436123   10909 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball: no such file or directory
	I0223 04:22:58.438049   10909 out.go:169] MINIKUBE_LOCATION=15909
	I0223 04:22:58.436192   10909 notify.go:220] Checking for updates...
	I0223 04:22:58.441449   10909 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 04:22:58.443403   10909 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 04:22:58.445341   10909 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	I0223 04:22:58.447105   10909 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 04:22:58.450415   10909 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 04:22:58.450615   10909 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 04:22:58.565915   10909 out.go:97] Using the kvm2 driver based on user configuration
	I0223 04:22:58.565947   10909 start.go:296] selected driver: kvm2
	I0223 04:22:58.565955   10909 start.go:857] validating driver "kvm2" against <nil>
	I0223 04:22:58.566243   10909 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 04:22:58.566366   10909 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3857/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 04:22:58.580908   10909 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 04:22:58.580971   10909 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0223 04:22:58.581453   10909 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0223 04:22:58.581614   10909 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0223 04:22:58.581647   10909 cni.go:84] Creating CNI manager for ""
	I0223 04:22:58.581664   10909 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0223 04:22:58.581671   10909 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0223 04:22:58.581698   10909 start_flags.go:319] config:
	{Name:download-only-060444 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-060444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 04:22:58.581903   10909 iso.go:125] acquiring lock: {Name:mk5ab603b94a1c1bcf9332974dc395e96678ad02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 04:22:58.584146   10909 out.go:97] Downloading VM boot image ...
	I0223 04:22:58.584187   10909 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso.sha256 -> /home/jenkins/minikube-integration/15909-3857/.minikube/cache/iso/amd64/minikube-v1.29.0-1676568791-15849-amd64.iso
	I0223 04:23:09.974492   10909 out.go:97] Starting control plane node download-only-060444 in cluster download-only-060444
	I0223 04:23:09.974517   10909 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0223 04:23:10.129439   10909 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0223 04:23:10.129482   10909 cache.go:57] Caching tarball of preloaded images
	I0223 04:23:10.129658   10909 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0223 04:23:10.131909   10909 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0223 04:23:10.131936   10909 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0223 04:23:10.286589   10909 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0223 04:23:28.118986   10909 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0223 04:23:28.119105   10909 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-060444"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.06s)
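
The LogsDuration output above shows how the preload download is pinned to a checksum (checksum=md5:d96a2b2afa188e17db7ddabb58d563fd) that minikube saves and then verifies. A minimal Go sketch of that verification step, assuming the tarball sits in the working directory; this is an illustration, not minikube's actual code:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 hashes the file at path and compares it to the expected
	// hex-encoded MD5, the check the "verifying checksum" log line implies.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		// Filename and checksum taken from the download URL in the log above.
		err := verifyMD5("preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4",
			"d96a2b2afa188e17db7ddabb58d563fd")
		fmt.Println(err)
	}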

TestDownloadOnly/v1.26.1/json-events (21.48s)
=== RUN   TestDownloadOnly/v1.26.1/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-060444 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-060444 --force --alsologtostderr --kubernetes-version=v1.26.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (21.478862617s)
--- PASS: TestDownloadOnly/v1.26.1/json-events (21.48s)

TestDownloadOnly/v1.26.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.26.1/preload-exists
--- PASS: TestDownloadOnly/v1.26.1/preload-exists (0.00s)

TestDownloadOnly/v1.26.1/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.26.1/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-060444
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-060444: exit status 85 (66.362978ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-060444 | jenkins | v1.29.0 | 23 Feb 23 04:22 UTC |          |
	|         | -p download-only-060444        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-060444 | jenkins | v1.29.0 | 23 Feb 23 04:23 UTC |          |
	|         | -p download-only-060444        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.1   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/02/23 04:23:32
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.20.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0223 04:23:32.422124   10945 out.go:296] Setting OutFile to fd 1 ...
	I0223 04:23:32.422557   10945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:23:32.422576   10945 out.go:309] Setting ErrFile to fd 2...
	I0223 04:23:32.422584   10945 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:23:32.422804   10945 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	W0223 04:23:32.423031   10945 root.go:312] Error reading config file at /home/jenkins/minikube-integration/15909-3857/.minikube/config/config.json: open /home/jenkins/minikube-integration/15909-3857/.minikube/config/config.json: no such file or directory
	I0223 04:23:32.423637   10945 out.go:303] Setting JSON to true
	I0223 04:23:32.424588   10945 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":357,"bootTime":1677125856,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 04:23:32.424656   10945 start.go:135] virtualization: kvm guest
	I0223 04:23:32.427079   10945 out.go:97] [download-only-060444] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 04:23:32.428824   10945 out.go:169] MINIKUBE_LOCATION=15909
	I0223 04:23:32.427220   10945 notify.go:220] Checking for updates...
	I0223 04:23:32.432203   10945 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 04:23:32.433970   10945 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 04:23:32.435638   10945 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	I0223 04:23:32.437454   10945 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0223 04:23:32.441523   10945 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0223 04:23:32.441956   10945 config.go:182] Loaded profile config "download-only-060444": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0223 04:23:32.442027   10945 start.go:765] api.Load failed for download-only-060444: filestore "download-only-060444": Docker machine "download-only-060444" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 04:23:32.442089   10945 driver.go:365] Setting default libvirt URI to qemu:///system
	W0223 04:23:32.442136   10945 start.go:765] api.Load failed for download-only-060444: filestore "download-only-060444": Docker machine "download-only-060444" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0223 04:23:32.474689   10945 out.go:97] Using the kvm2 driver based on existing profile
	I0223 04:23:32.474717   10945 start.go:296] selected driver: kvm2
	I0223 04:23:32.474724   10945 start.go:857] validating driver "kvm2" against &{Name:download-only-060444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-060444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 04:23:32.475118   10945 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 04:23:32.475196   10945 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/15909-3857/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0223 04:23:32.489319   10945 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0223 04:23:32.489963   10945 cni.go:84] Creating CNI manager for ""
	I0223 04:23:32.490011   10945 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0223 04:23:32.490029   10945 start_flags.go:319] config:
	{Name:download-only-060444 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:download-only-060444 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 04:23:32.490163   10945 iso.go:125] acquiring lock: {Name:mk5ab603b94a1c1bcf9332974dc395e96678ad02 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0223 04:23:32.492397   10945 out.go:97] Starting control plane node download-only-060444 in cluster download-only-060444
	I0223 04:23:32.492417   10945 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime containerd
	I0223 04:23:33.132965   10945 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4
	I0223 04:23:33.133010   10945 cache.go:57] Caching tarball of preloaded images
	I0223 04:23:33.133207   10945 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime containerd
	I0223 04:23:33.135653   10945 out.go:97] Downloading Kubernetes v1.26.1 preload ...
	I0223 04:23:33.135679   10945 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4 ...
	I0223 04:23:33.294821   10945 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.1/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:23c4ed2d6e5c604534a6dbbb48ec17ff -> /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4
	I0223 04:23:49.789110   10945 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4 ...
	I0223 04:23:49.789217   10945 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/15909-3857/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4 ...
	I0223 04:23:50.665646   10945 cache.go:60] Finished verifying existence of preloaded tar for  v1.26.1 on containerd
	I0223 04:23:50.665769   10945 profile.go:148] Saving config to /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/download-only-060444/config.json ...
	I0223 04:23:50.665959   10945 preload.go:132] Checking if preload exists for k8s version v1.26.1 and runtime containerd
	I0223 04:23:50.666198   10945 download.go:107] Downloading: https://dl.k8s.io/release/v1.26.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.26.1/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/15909-3857/.minikube/cache/linux/amd64/v1.26.1/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-060444"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.1/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (0.38s)
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.38s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-060444
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.36s)

TestBinaryMirror (0.63s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-685794 --alsologtostderr --binary-mirror http://127.0.0.1:34925 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-685794" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-685794
--- PASS: TestBinaryMirror (0.63s)
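
TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:34925 above). A hedged sketch of standing up such a mirror with Go's standard library; the ./mirror directory and its layout are assumptions, and it would need to mirror the dl.k8s.io release paths minikube requests:

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve ./mirror on the loopback address the test passes to
		// --binary-mirror; minikube then fetches kubectl/kubeadm/kubelet
		// from here instead of the upstream release host.
		log.Fatal(http.ListenAndServe("127.0.0.1:34925",
			http.FileServer(http.Dir("./mirror"))))
	}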

TestOffline (118.45s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-961217 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-961217 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m57.287627715s)
helpers_test.go:175: Cleaning up "offline-containerd-961217" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-961217
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-961217: (1.166329308s)
--- PASS: TestOffline (118.45s)

TestAddons/Setup (148.67s)
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-049813 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-049813 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m28.666040789s)
--- PASS: TestAddons/Setup (148.67s)

TestAddons/parallel/Registry (21.38s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 12.737817ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-87grv" [8d73a2ea-a170-4fd3-aa3d-35ce809955f6] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.016452959s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7ltrv" [975e564d-22d8-4a25-a087-cd45b62731b0] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.011943092s
addons_test.go:305: (dbg) Run:  kubectl --context addons-049813 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-049813 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-049813 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (10.651932092s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 ip
2023/02/23 04:26:45 [DEBUG] GET http://192.168.39.218:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (21.38s)
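
The registry check above reduces to an in-cluster reachability probe: a throwaway busybox pod runs wget --spider against the registry service's cluster DNS name. The same probe as a Go sketch; it only succeeds from inside the cluster, where that name resolves:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// HEAD is the closest analogue of `wget --spider`: fetch headers
		// only, confirming the service answers without downloading a body.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Head("http://registry.kube-system.svc.cluster.local")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		resp.Body.Close()
		fmt.Println("registry responded:", resp.Status)
	}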

TestAddons/parallel/Ingress (23.75s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-049813 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:197: (dbg) Run:  kubectl --context addons-049813 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-049813 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bbe87656-e2ce-4b03-adfb-da45b33b3e85] Pending
helpers_test.go:344: "nginx" [bbe87656-e2ce-4b03-adfb-da45b33b3e85] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bbe87656-e2ce-4b03-adfb-da45b33b3e85] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.023231464s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-049813 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.218
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-049813 addons disable ingress-dns --alsologtostderr -v=1: (1.574383948s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-049813 addons disable ingress --alsologtostderr -v=1: (7.804504561s)
--- PASS: TestAddons/parallel/Ingress (23.75s)
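
The ingress check is a plain HTTP request with an overridden Host header, which the test issues as curl against 127.0.0.1 over minikube ssh. A Go sketch of the equivalent request; reaching the node IP printed above from outside the VM is an assumption:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		req, err := http.NewRequest("GET", "http://192.168.39.218/", nil)
		if err != nil {
			fmt.Println(err)
			return
		}
		// In net/http the Host line is set via req.Host, not req.Header;
		// this is what routes the request to the nginx Ingress rule.
		req.Host = "nginx.example.com"
		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println(err)
			return
		}
		resp.Body.Close()
		fmt.Println("ingress answered:", resp.Status)
	}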

TestAddons/parallel/MetricsServer (6.31s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 14.23351ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-x2m4t" [5c59b86b-5f5a-4e2f-9e06-65e34cb6df5b] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.019031783s
addons_test.go:380: (dbg) Run:  kubectl --context addons-049813 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:397: (dbg) Done: out/minikube-linux-amd64 -p addons-049813 addons disable metrics-server --alsologtostderr -v=1: (1.199581674s)
--- PASS: TestAddons/parallel/MetricsServer (6.31s)

TestAddons/parallel/HelmTiller (14.21s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 12.698295ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-fgpch" [2f7e3432-9040-494a-ad46-d0d651ba7a5b] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.012734192s
addons_test.go:438: (dbg) Run:  kubectl --context addons-049813 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-049813 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.689445358s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.21s)

TestAddons/parallel/CSI (63.23s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 9.267068ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-049813 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-049813 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [52f1c17a-56b7-493d-b5fd-bb52b488af29] Pending
helpers_test.go:344: "task-pv-pod" [52f1c17a-56b7-493d-b5fd-bb52b488af29] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [52f1c17a-56b7-493d-b5fd-bb52b488af29] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.015620785s
addons_test.go:549: (dbg) Run:  kubectl --context addons-049813 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-049813 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-049813 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-049813 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-049813 delete pod task-pv-pod: (1.133214911s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-049813 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-049813 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-049813 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-049813 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [7e9eb628-06fe-4d94-8ade-58e64e6cfdc4] Pending
helpers_test.go:344: "task-pv-pod-restore" [7e9eb628-06fe-4d94-8ade-58e64e6cfdc4] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [7e9eb628-06fe-4d94-8ade-58e64e6cfdc4] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.018296834s
addons_test.go:591: (dbg) Run:  kubectl --context addons-049813 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-049813 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-049813 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-049813 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.567339858s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-049813 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.23s)
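
The run of helpers_test.go:394 lines above is a poll loop: the helper re-reads the PVC phase until the claim binds. A rough Go equivalent that shells out to kubectl the same way; the context, claim name, and timeout mirror the test, but this is a sketch, not the helper's real code:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForPVCBound re-runs the same jsonpath query the helper uses
	// until the claim reports Bound or the deadline passes.
	func waitForPVCBound(kubeContext, name, namespace string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", kubeContext,
				"get", "pvc", name, "-n", namespace,
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %s", namespace, name, timeout)
	}

	func main() {
		fmt.Println(waitForPVCBound("addons-049813", "hpvc", "default", 6*time.Minute))
	}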

TestAddons/parallel/Headlamp (16.95s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-049813 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-049813 --alsologtostderr -v=1: (1.923089535s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5759877c79-bpjnm" [fb9f49d4-049b-4f03-aeec-58e00c052825] Pending
helpers_test.go:344: "headlamp-5759877c79-bpjnm" [fb9f49d4-049b-4f03-aeec-58e00c052825] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5759877c79-bpjnm" [fb9f49d4-049b-4f03-aeec-58e00c052825] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.026865899s
--- PASS: TestAddons/parallel/Headlamp (16.95s)

TestAddons/parallel/CloudSpanner (5.55s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-ddf7c59b4-q74n8" [3d5263e8-e04d-4283-ab04-bea5c6b8cf55] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.016527356s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-049813
--- PASS: TestAddons/parallel/CloudSpanner (5.55s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-049813 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-049813 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/StoppedEnableDisable (91.97s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-049813
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-049813: (1m31.795935952s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-049813
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-049813
--- PASS: TestAddons/StoppedEnableDisable (91.97s)

TestCertOptions (95.13s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-603958 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
E0223 05:16:24.342200   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-603958 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m33.578722119s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-603958 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-603958 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-603958 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-603958" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-603958
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-603958: (1.054438481s)
--- PASS: TestCertOptions (95.13s)
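The flags above control the SANs and port baked into the apiserver certificate, and the openssl call is how the test verifies them. The same check outside the harness, sketched (profile name hypothetical; flags copied from the run):

  minikube start -p certopts-demo --memory=2048 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=kvm2 --container-runtime=containerd
  # the requested IPs and names should appear in the certificate's SAN list
  minikube -p certopts-demo ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"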

TestCertExpiration (244.9s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-984816 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-984816 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (57.785266762s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-984816 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-984816 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (5.542888766s)
helpers_test.go:175: Cleaning up "cert-expiration-984816" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-984816
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-984816: (1.569789199s)
--- PASS: TestCertExpiration (244.90s)
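The two starts differ only in --cert-expiration: the first issues certificates valid for 3 minutes, and the second start, run after they lapse, re-issues them with a one-year (8760h) lifetime, which is why it completes in ~5.5s. Sketched with a hypothetical profile name:

  # first start: deliberately short-lived certs
  minikube start -p certexp-demo --memory=2048 --cert-expiration=3m \
    --driver=kvm2 --container-runtime=containerd
  # once the certs expire, restarting with a longer lifetime renews them
  minikube start -p certexp-demo --memory=2048 --cert-expiration=8760h \
    --driver=kvm2 --container-runtime=containerd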

TestForceSystemdFlag (83.9s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-731641 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-731641 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m22.583496806s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-731641 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-731641" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-731641
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-731641: (1.090520319s)
--- PASS: TestForceSystemdFlag (83.90s)
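--force-systemd switches the guest to the systemd cgroup driver, and the test confirms it by reading the containerd config. A quick manual spot-check, sketched (profile name hypothetical; grepping for SystemdCgroup assumes the usual runc options section of config.toml):

  minikube start -p systemd-demo --memory=2048 --force-systemd \
    --driver=kvm2 --container-runtime=containerd
  # the test cats the whole file; a grep narrows it to the cgroup setting
  minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup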

TestForceSystemdEnv (82.36s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-002231 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-002231 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m20.760587688s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-002231 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-002231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-002231
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-002231: (1.379038064s)
--- PASS: TestForceSystemdEnv (82.36s)

TestKVMDriverInstallOrUpdate (15.7s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (15.70s)

TestErrorSpam/setup (53.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-300735 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-300735 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-300735 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-300735 --driver=kvm2  --container-runtime=containerd: (53.536497833s)
--- PASS: TestErrorSpam/setup (53.54s)

TestErrorSpam/start (0.36s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 start --dry-run
--- PASS: TestErrorSpam/start (0.36s)

TestErrorSpam/status (0.74s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 status
--- PASS: TestErrorSpam/status (0.74s)

TestErrorSpam/pause (1.42s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 pause
--- PASS: TestErrorSpam/pause (1.42s)

TestErrorSpam/unpause (1.5s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 unpause
--- PASS: TestErrorSpam/unpause (1.50s)

TestErrorSpam/stop (2.56s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 stop: (2.411514535s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-300735 --log_dir /tmp/nospam-300735 stop
--- PASS: TestErrorSpam/stop (2.56s)
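Each TestErrorSpam subtest runs the same subcommand a few times against a shared --log_dir (note the "Cleaning up N logfile(s)" lines) and then checks the collected output for unexpected warnings or errors. The repeated pattern, sketched with hypothetical names:

  # every invocation writes into the shared log directory for later inspection
  minikube -p nospam-demo --log_dir /tmp/nospam-demo status
  minikube -p nospam-demo --log_dir /tmp/nospam-demo pause
  minikube -p nospam-demo --log_dir /tmp/nospam-demo unpause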

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1820: local sync path: /home/jenkins/minikube-integration/15909-3857/.minikube/files/etc/test/nested/copy/10897/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (69.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2199: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690311 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0223 04:31:24.343236   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.349068   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.359301   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.379624   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.419928   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.500236   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.660664   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:24.981240   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:25.622183   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:26.902672   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:29.463689   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:31:34.584851   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
functional_test.go:2199: (dbg) Done: out/minikube-linux-amd64 start -p functional-690311 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m9.379087248s)
--- PASS: TestFunctional/serial/StartWithProxy (69.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.88s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:653: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690311 --alsologtostderr -v=8
functional_test.go:653: (dbg) Done: out/minikube-linux-amd64 start -p functional-690311 --alsologtostderr -v=8: (6.876732331s)
functional_test.go:657: soft start took 6.877247985s for "functional-690311" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.88s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:675: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:690: (dbg) Run:  kubectl --context functional-690311 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (14.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cache add k8s.gcr.io/pause:3.1
E0223 04:31:44.825837   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 cache add k8s.gcr.io/pause:3.1: (4.949268121s)
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cache add k8s.gcr.io/pause:3.3
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 cache add k8s.gcr.io/pause:3.3: (4.633835023s)
functional_test.go:1043: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cache add k8s.gcr.io/pause:latest
functional_test.go:1043: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 cache add k8s.gcr.io/pause:latest: (4.851310746s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (14.43s)
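cache add pulls an image to the host-side cache and loads it into the node's runtime, which is why each pause-image add above costs ~5s. The basic workflow, sketched (profile name hypothetical):

  minikube -p cache-demo cache add k8s.gcr.io/pause:3.1
  minikube cache list                         # the cache is tracked globally
  minikube cache delete k8s.gcr.io/pause:3.1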

TestFunctional/serial/CacheCmd/cache/add_local (2.88s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1071: (dbg) Run:  docker build -t minikube-local-cache-test:functional-690311 /tmp/TestFunctionalserialCacheCmdcacheadd_local2434226629/001
functional_test.go:1083: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cache add minikube-local-cache-test:functional-690311
functional_test.go:1083: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 cache add minikube-local-cache-test:functional-690311: (2.49942197s)
functional_test.go:1088: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cache delete minikube-local-cache-test:functional-690311
functional_test.go:1077: (dbg) Run:  docker rmi minikube-local-cache-test:functional-690311
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.88s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1096: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (3.81s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1141: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1147: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (219.05367ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1152: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cache reload
functional_test.go:1152: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 cache reload: (3.140565975s)
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (3.81s)
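cache reload pushes cached images back into the node after they have been removed from the runtime; the crictl rmi / inspecti round-trip above is exactly that. Sketched (profile name hypothetical):

  # remove the image inside the node, confirm it is gone, then reload
  minikube -p cache-demo ssh sudo crictl rmi k8s.gcr.io/pause:latest
  minikube -p cache-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exit 1
  minikube -p cache-demo cache reload
  minikube -p cache-demo ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # found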

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1166: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:710: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 kubectl -- --context functional-690311 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:735: (dbg) Run:  out/kubectl --context functional-690311 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (43.36s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:751: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0223 04:32:05.306991   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:32:46.267270   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
functional_test.go:751: (dbg) Done: out/minikube-linux-amd64 start -p functional-690311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.356937842s)
functional_test.go:755: restart took 43.35702721s for "functional-690311" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.36s)
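--extra-config forwards component flags through to kubeadm; here it turns on an extra apiserver admission plugin, and --wait=all makes the restart block until every component is healthy again. Sketched (profile name hypothetical):

  minikube start -p extracfg-demo \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all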

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:804: (dbg) Run:  kubectl --context functional-690311 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:819: etcd phase: Running
functional_test.go:829: etcd status: Ready
functional_test.go:819: kube-apiserver phase: Running
functional_test.go:829: kube-apiserver status: Ready
functional_test.go:819: kube-controller-manager phase: Running
functional_test.go:829: kube-controller-manager status: Ready
functional_test.go:819: kube-scheduler phase: Running
functional_test.go:829: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.39s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1230: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 logs
functional_test.go:1230: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 logs: (1.388497353s)
--- PASS: TestFunctional/serial/LogsCmd (1.39s)

TestFunctional/serial/LogsFileCmd (1.4s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1244: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 logs --file /tmp/TestFunctionalserialLogsFileCmd2175134204/001/logs.txt
functional_test.go:1244: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 logs --file /tmp/TestFunctionalserialLogsFileCmd2175134204/001/logs.txt: (1.403404642s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.40s)

TestFunctional/parallel/ConfigCmd (0.37s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 config get cpus: exit status 14 (63.321189ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 config set cpus 2
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 config get cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 config unset cpus
functional_test.go:1193: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 config get cpus
functional_test.go:1193: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 config get cpus: exit status 14 (65.961693ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
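config get on an unset key exits with status 14, which the test asserts both before and after the set/unset cycle. The cycle, sketched (profile name hypothetical):

  minikube -p cfg-demo config set cpus 2
  minikube -p cfg-demo config get cpus      # prints 2
  minikube -p cfg-demo config unset cpus
  minikube -p cfg-demo config get cpus      # exit status 14: key not in config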

TestFunctional/parallel/DashboardCmd (15.12s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:899: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-690311 --alsologtostderr -v=1]
functional_test.go:904: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-690311 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 16279: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.12s)

TestFunctional/parallel/DryRun (0.28s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:968: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690311 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:968: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-690311 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (148.480368ms)
-- stdout --
	* [functional-690311] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0223 04:33:03.425681   15787 out.go:296] Setting OutFile to fd 1 ...
	I0223 04:33:03.425806   15787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:33:03.425815   15787 out.go:309] Setting ErrFile to fd 2...
	I0223 04:33:03.425819   15787 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:33:03.426005   15787 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	I0223 04:33:03.426679   15787 out.go:303] Setting JSON to false
	I0223 04:33:03.428046   15787 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":928,"bootTime":1677125856,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 04:33:03.428230   15787 start.go:135] virtualization: kvm guest
	I0223 04:33:03.431280   15787 out.go:177] * [functional-690311] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 04:33:03.433519   15787 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 04:33:03.435270   15787 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 04:33:03.433502   15787 notify.go:220] Checking for updates...
	I0223 04:33:03.438543   15787 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 04:33:03.440254   15787 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	I0223 04:33:03.441962   15787 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 04:33:03.443728   15787 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 04:33:03.445879   15787 config.go:182] Loaded profile config "functional-690311": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 04:33:03.446408   15787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:33:03.446469   15787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:33:03.462516   15787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42281
	I0223 04:33:03.462959   15787 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:33:03.463616   15787 main.go:141] libmachine: Using API Version  1
	I0223 04:33:03.463650   15787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:33:03.463949   15787 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:33:03.464129   15787 main.go:141] libmachine: (functional-690311) Calling .DriverName
	I0223 04:33:03.464325   15787 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 04:33:03.464872   15787 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:33:03.464915   15787 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:33:03.484066   15787 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40167
	I0223 04:33:03.484479   15787 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:33:03.484946   15787 main.go:141] libmachine: Using API Version  1
	I0223 04:33:03.484978   15787 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:33:03.485342   15787 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:33:03.485495   15787 main.go:141] libmachine: (functional-690311) Calling .DriverName
	I0223 04:33:03.519695   15787 out.go:177] * Using the kvm2 driver based on existing profile
	I0223 04:33:03.521421   15787 start.go:296] selected driver: kvm2
	I0223 04:33:03.521439   15787 start.go:857] validating driver "kvm2" against &{Name:functional-690311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-690311 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 04:33:03.521557   15787 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 04:33:03.524108   15787 out.go:177] 
	W0223 04:33:03.525821   15787 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0223 04:33:03.527393   15787 out.go:177] 
** /stderr **
functional_test.go:985: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690311 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.28s)
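A --dry-run start still validates the request against the existing profile, so the 250MB memory ask fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) without touching the VM. Sketched against a hypothetical existing profile:

  # fails validation: 250MiB is below the usable minimum of 1800MB
  minikube start -p dryrun-demo --dry-run --memory 250MB \
    --driver=kvm2 --container-runtime=containerd
  echo $?   # 23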

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 start -p functional-690311 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1014: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-690311 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (166.476135ms)
-- stdout --
	* [functional-690311] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0223 04:33:03.713653   15864 out.go:296] Setting OutFile to fd 1 ...
	I0223 04:33:03.713779   15864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:33:03.713788   15864 out.go:309] Setting ErrFile to fd 2...
	I0223 04:33:03.713793   15864 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:33:03.713959   15864 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	I0223 04:33:03.714483   15864 out.go:303] Setting JSON to false
	I0223 04:33:03.716041   15864 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":928,"bootTime":1677125856,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 04:33:03.716294   15864 start.go:135] virtualization: kvm guest
	I0223 04:33:03.720014   15864 out.go:177] * [functional-690311] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0223 04:33:03.721706   15864 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 04:33:03.721732   15864 notify.go:220] Checking for updates...
	I0223 04:33:03.723291   15864 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 04:33:03.724977   15864 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 04:33:03.726639   15864 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	I0223 04:33:03.728125   15864 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 04:33:03.729697   15864 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 04:33:03.731770   15864 config.go:182] Loaded profile config "functional-690311": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 04:33:03.732535   15864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:33:03.732633   15864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:33:03.750628   15864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0223 04:33:03.751025   15864 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:33:03.751699   15864 main.go:141] libmachine: Using API Version  1
	I0223 04:33:03.751732   15864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:33:03.752102   15864 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:33:03.752297   15864 main.go:141] libmachine: (functional-690311) Calling .DriverName
	I0223 04:33:03.752483   15864 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 04:33:03.752838   15864 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:33:03.752867   15864 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:33:03.768311   15864 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35899
	I0223 04:33:03.768771   15864 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:33:03.769303   15864 main.go:141] libmachine: Using API Version  1
	I0223 04:33:03.769324   15864 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:33:03.769641   15864 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:33:03.769827   15864 main.go:141] libmachine: (functional-690311) Calling .DriverName
	I0223 04:33:03.813979   15864 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0223 04:33:03.815419   15864 start.go:296] selected driver: kvm2
	I0223 04:33:03.815434   15864 start.go:857] validating driver "kvm2" against &{Name:functional-690311 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/15849/minikube-v1.29.0-1676568791-15849-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1676506612-15768@sha256:cc1cb283879fedae93096946a6953a50075ed680d467a47cbf669e0ed7d3aebc Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.1 ClusterName:functional-690311 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.180 Port:8441 KubernetesVersion:v1.26.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0223 04:33:03.815555   15864 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 04:33:03.820212   15864 out.go:177] 
	W0223 04:33:03.821975   15864 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0223 04:33:03.823802   15864 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (0.87s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:848: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 status
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:866: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.87s)

TestFunctional/parallel/ServiceCmdConnect (12.51s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1597: (dbg) Run:  kubectl --context functional-690311 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1603: (dbg) Run:  kubectl --context functional-690311 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-vzfbx" [9922955f-d2e6-4d84-8d3b-f96c9d92312c] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-vzfbx" [9922955f-d2e6-4d84-8d3b-f96c9d92312c] Running
functional_test.go:1608: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.014392233s
functional_test.go:1617: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 service hello-node-connect --url
functional_test.go:1623: found endpoint for hello-node-connect: http://192.168.39.180:32224
functional_test.go:1643: http://192.168.39.180:32224: success! body:
Hostname: hello-node-connect-5cf7cc858f-vzfbx
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.180:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.39.180:32224
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.51s)
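The test wires a NodePort service to an echoserver deployment and fetches the URL that minikube service reports; the response body above is the echoserver's standard dump of the request. The same flow by hand, sketched (names hypothetical; the curl is added for illustration):

  kubectl create deployment hello-demo --image=k8s.gcr.io/echoserver:1.8
  kubectl expose deployment hello-demo --type=NodePort --port=8080
  # prints something like http://192.168.39.180:32224
  minikube -p svc-demo service hello-demo --url
  curl "$(minikube -p svc-demo service hello-demo --url)"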

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1658: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 addons list
functional_test.go:1670: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (46.87s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [2bd0511f-35fb-4496-9f9e-7148a4527bb5] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.011379477s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-690311 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-690311 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-690311 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-690311 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-690311 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e8605eff-f463-4779-bf9b-7bb7d1597ae1] Pending
helpers_test.go:344: "sp-pod" [e8605eff-f463-4779-bf9b-7bb7d1597ae1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e8605eff-f463-4779-bf9b-7bb7d1597ae1] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.041185951s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-690311 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-690311 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-690311 delete -f testdata/storage-provisioner/pod.yaml: (2.010173593s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-690311 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [b1182cf2-e2d3-4360-824d-2071b072f5e7] Pending
helpers_test.go:344: "sp-pod" [b1182cf2-e2d3-4360-824d-2071b072f5e7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [b1182cf2-e2d3-4360-824d-2071b072f5e7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.013401824s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-690311 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (46.87s)
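The pass demonstrates that data written through the PVC outlives the pod: a file is touched in the first sp-pod, the pod is deleted and recreated, and the file is still there. Condensed sketch (manifests are the minikube repo's testdata; kubectl context omitted):

  kubectl apply -f testdata/storage-provisioner/pvc.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete -f testdata/storage-provisioner/pod.yaml
  kubectl apply -f testdata/storage-provisioner/pod.yaml
  kubectl exec sp-pod -- ls /tmp/mount      # foo persists across pods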

TestFunctional/parallel/SSHCmd (0.5s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1693: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "echo hello"
functional_test.go:1710: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.50s)

TestFunctional/parallel/CpCmd (0.99s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh -n functional-690311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 cp functional-690311:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2199298884/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh -n functional-690311 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.99s)
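minikube cp copies files in both directions between the host and the node, and the ssh cat calls confirm the contents landed intact. Sketched (profile name hypothetical; for a single-node profile the -n node name matches the profile):

  minikube -p cp-demo cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p cp-demo ssh -n cp-demo "sudo cat /home/docker/cp-test.txt"
  minikube -p cp-demo cp cp-demo:/home/docker/cp-test.txt /tmp/cp-test.txt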

TestFunctional/parallel/MySQL (36.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1758: (dbg) Run:  kubectl --context functional-690311 replace --force -f testdata/mysql.yaml
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-xdd4g" [489d8cf1-4a2e-4298-8f72-472323653b01] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-xdd4g" [489d8cf1-4a2e-4298-8f72-472323653b01] Running
functional_test.go:1764: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.009959056s
functional_test.go:1772: (dbg) Run:  kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;": exit status 1 (203.793976ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;": exit status 1 (181.683155ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;"
functional_test.go:1772: (dbg) Non-zero exit: kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;": exit status 1 (135.859368ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1772: (dbg) Run:  kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (36.31s)
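
The three non-zero exits above are the test's readiness retry loop: the pod reports Running before mysqld has finished initializing, so early exec attempts fail with "Access denied" (the image's init phase has not provisioned the final root password yet) or "Can't connect ... mysqld.sock" until the server is up. A minimal sketch of the same wait-until-ready pattern, reusing the pod name from the log above (the 5s interval and 24-try bound are arbitrary choices, not from the test):

	# Retry the query until mysqld inside the pod actually accepts connections.
	for i in $(seq 1 24); do
	  kubectl --context functional-690311 exec mysql-888f84dd9-xdd4g -- \
	    mysql -ppassword -e "show databases;" && break
	  sleep 5
	done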

TestFunctional/parallel/FileSync (0.23s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1894: Checking for existence of /etc/test/nested/copy/10897/hosts within VM
functional_test.go:1896: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /etc/test/nested/copy/10897/hosts"
functional_test.go:1901: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)

TestFunctional/parallel/CertSync (1.33s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1937: Checking for existence of /etc/ssl/certs/10897.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /etc/ssl/certs/10897.pem"
functional_test.go:1937: Checking for existence of /usr/share/ca-certificates/10897.pem within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /usr/share/ca-certificates/10897.pem"
functional_test.go:1937: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1938: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/108972.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /etc/ssl/certs/108972.pem"
functional_test.go:1964: Checking for existence of /usr/share/ca-certificates/108972.pem within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /usr/share/ca-certificates/108972.pem"
functional_test.go:1964: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1965: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.33s)
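
The paths checked here pair each PEM file (named after the test run's PID, 10897) with an OpenSSL hash name: /etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0 follow the <subject-hash>.<n> naming OpenSSL uses to look certificates up in a CA directory. A sketch of deriving that name (assumes openssl is on PATH and the cert file exists at the path shown above):

	# Print the subject hash OpenSSL uses for CA-directory lookups.
	openssl x509 -noout -subject_hash -in /etc/ssl/certs/10897.pem
	# e.g. 51391683  ->  the cert is found via /etc/ssl/certs/51391683.0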

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:216: (dbg) Run:  kubectl --context functional-690311 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1992: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo systemctl is-active docker"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh "sudo systemctl is-active docker": exit status 1 (358.144043ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:1992: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo systemctl is-active crio"
functional_test.go:1992: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh "sudo systemctl is-active crio": exit status 1 (444.327041ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
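
Both non-zero exits are the expected result: this cluster runs containerd, so "systemctl is-active" prints "inactive" for docker and crio and exits with status 3, systemd's code for a unit that is not active, which the ssh wrapper surfaces as "Process exited with status 3". A quick manual cross-check (sketch):

	out/minikube-linux-amd64 -p functional-690311 ssh "sudo systemctl is-active containerd"  # expect "active", exit 0
	out/minikube-linux-amd64 -p functional-690311 ssh "sudo systemctl is-active docker"      # expect "inactive", exit 3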

TestFunctional/parallel/License (0.32s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2253: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.32s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1267: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1272: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.37s)
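
The misspelled "profile lis" on the first line is presumably deliberate: the subtest name suggests it verifies that an invalid subcommand does not create a profile as a side effect, which the following "profile list --output json" then confirms.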

TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1307: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1312: Took "261.078343ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1326: Took "49.669827ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.31s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1358: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1363: Took "267.165249ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1371: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1376: Took "47.83093ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.32s)

TestFunctional/parallel/MountCmd/any-port (10.74s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690311 /tmp/TestFunctionalparallelMountCmdany-port2071321701/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1677126773043679133" to /tmp/TestFunctionalparallelMountCmdany-port2071321701/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1677126773043679133" to /tmp/TestFunctionalparallelMountCmdany-port2071321701/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1677126773043679133" to /tmp/TestFunctionalparallelMountCmdany-port2071321701/001/test-1677126773043679133
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (195.744937ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Feb 23 04:32 created-by-test
-rw-r--r-- 1 docker docker 24 Feb 23 04:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Feb 23 04:32 test-1677126773043679133
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh cat /mount-9p/test-1677126773043679133
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-690311 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [086fdb89-f5f1-4cb3-a8ae-f14c2e54c959] Pending
helpers_test.go:344: "busybox-mount" [086fdb89-f5f1-4cb3-a8ae-f14c2e54c959] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [086fdb89-f5f1-4cb3-a8ae-f14c2e54c959] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [086fdb89-f5f1-4cb3-a8ae-f14c2e54c959] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.016885373s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-690311 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690311 /tmp/TestFunctionalparallelMountCmdany-port2071321701/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.74s)
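
The first failed findmnt probe is expected: the mount daemon started on the first line had not finished exporting the 9p share yet, and the test simply retries. The same check can be reproduced by hand with the commands from this log (sketch; run the mount in a second terminal since it stays in the foreground):

	# Terminal 1: export a host directory into the guest over 9p.
	out/minikube-linux-amd64 mount -p functional-690311 /tmp/TestFunctionalparallelMountCmdany-port2071321701/001:/mount-9p
	# Terminal 2: confirm the guest sees a 9p filesystem at the mount point.
	out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p"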

TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.31s)

=== RUN   TestFunctional/parallel/ServiceCmd/ServiceJSONOutput
functional_test.go:1547: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 service list -o json
functional_test.go:1552: Took "304.984609ms" to run "out/minikube-linux-amd64 -p functional-690311 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/ServiceJSONOutput (0.31s)

TestFunctional/parallel/MountCmd/specific-port (2.20s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-690311 /tmp/TestFunctionalparallelMountCmdspecific-port709221179/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (269.663043ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690311 /tmp/TestFunctionalparallelMountCmdspecific-port709221179/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh "sudo umount -f /mount-9p": exit status 1 (419.450763ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-690311 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-690311 /tmp/TestFunctionalparallelMountCmdspecific-port709221179/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.20s)

TestFunctional/parallel/Version/short (0.05s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2221: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2235: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls --format short
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690311 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.1
registry.k8s.io/kube-proxy:v1.26.1
registry.k8s.io/kube-controller-manager:v1.26.1
registry.k8s.io/kube-apiserver:v1.26.1
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-690311
docker.io/library/nginx:latest
docker.io/library/minikube-local-cache-test:functional-690311
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls --format table
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690311 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-690311  | sha256:ffd4cf | 10.8MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| registry.k8s.io/kube-controller-manager     | v1.26.1            | sha256:e9c08e | 32.2MB |
| registry.k8s.io/kube-proxy                  | v1.26.1            | sha256:46a6bb | 21.5MB |
| docker.io/library/nginx                     | latest             | sha256:3f8a00 | 56.9MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/kube-scheduler              | v1.26.1            | sha256:655493 | 17.5MB |
| docker.io/library/minikube-local-cache-test | functional-690311  | sha256:dc5c27 | 1.12kB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/etcd                        | 3.5.6-0            | sha256:fce326 | 103MB  |
| registry.k8s.io/kube-apiserver              | v1.26.1            | sha256:deb046 | 35.3MB |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls --format json
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690311 image ls --format json:
[{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd","repoDigests":["registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3"],"repoTags":["registry.k8s.io/kube-proxy:v1.26.1"],"size":"21536169"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"75788960"},{"id":"sha256:dc5c27d93d15f5ccf103e3116c6b9b03f83ce34e6807ee620e470eb1f9c0df8c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-690311"],"size":"1124"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-690311"],"size":"10823156"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f","repoDigests":["registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.1"],"size":"17486267"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3","repoDigests":["registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26"],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.1"],"size":"35320235"},{"id":"sha256:3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94","repoDigests":["docker.io/library/nginx@sha256:6650513efd1d27c1f8a5351cbd33edf85cc7e0d9d0fcb4ffb23d8fa89b601ba8"],"repoTags":["docker.io/library/nginx:latest"],"size":"56897816"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":["registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c"],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"102542580"},{"id":"sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.1"],"size":"32245960"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.41s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:258: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls --format yaml
functional_test.go:263: (dbg) Stdout: out/minikube-linux-amd64 -p functional-690311 image ls --format yaml:
- id: sha256:07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "75788960"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:deb04688c4a3559c313d0023133e3f95b69204f4bff4145265bc85e9672b77f3
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:99e1ed9fbc8a8d36a70f148f25130c02e0e366875249906be0bcb2c2d9df0c26
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.1
size: "35320235"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:dc5c27d93d15f5ccf103e3116c6b9b03f83ce34e6807ee620e470eb1f9c0df8c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-690311
size: "1124"
- id: sha256:3f8a00f137a0d2c8a2163a09901e28e2471999fde4efc2f9570b91f1c30acf94
repoDigests:
- docker.io/library/nginx@sha256:6650513efd1d27c1f8a5351cbd33edf85cc7e0d9d0fcb4ffb23d8fa89b601ba8
repoTags:
- docker.io/library/nginx:latest
size: "56897816"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:46a6bb3c77ce01ed45ccef835bd95a08ec7ce09d3e2c4f63ed03c2c3b26b70fd
repoDigests:
- registry.k8s.io/kube-proxy@sha256:85f705e7d98158a67432c53885b0d470c673b0fad3693440b45d07efebcda1c3
repoTags:
- registry.k8s.io/kube-proxy:v1.26.1
size: "21536169"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-690311
size: "10823156"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests:
- registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "102542580"
- id: sha256:e9c08e11b07f68c1805c49e4ce66e5a9e6b2d59f6f65041c113b635095a7ad8d
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:40adecbe3a40aa147c7d6e9a1f5fbd99b3f6d42d5222483ed3a47337d4f9a10b
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.1
size: "32245960"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:655493523f6076092624c06fd5facf9541a9b3d54e6f3bf5a6e078ee7b1ba44f
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:af0292c2c4fa6d09ee8544445eef373c1c280113cb6c968398a37da3744c41e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.1
size: "17486267"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.45s)

TestFunctional/parallel/ImageCommands/ImageBuild (5.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 ssh pgrep buildkitd
functional_test.go:305: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-690311 ssh pgrep buildkitd: exit status 1 (229.273154ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image build -t localhost/my-image:functional-690311 testdata/build
functional_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image build -t localhost/my-image:functional-690311 testdata/build: (5.075696362s)
functional_test.go:320: (dbg) Stderr: out/minikube-linux-amd64 -p functional-690311 image build -t localhost/my-image:functional-690311 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 2.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.9s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.3s done
#8 exporting manifest sha256:b16fcfeb1b91b2d0227f797053aed0fece41024c6717baad03049964f93c6879 0.0s done
#8 exporting config sha256:9fd10139cf21c2a28c94415e6ddb0ca2f5f9ae2f058b958de4beaa1caf75f38d 0.0s done
#8 naming to localhost/my-image:functional-690311 done
#8 DONE 0.4s
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.54s)
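
From build steps #5-#7 above, the Dockerfile under testdata/build can be inferred; the reconstruction below is a sketch based only on this log, not the checked-in file:

	# Inferred contents of testdata/build/Dockerfile (from steps #5-#7):
	#   FROM gcr.io/k8s-minikube/busybox
	#   RUN true
	#   ADD content.txt /
	out/minikube-linux-amd64 -p functional-690311 image build -t localhost/my-image:functional-690311 testdata/build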

TestFunctional/parallel/ImageCommands/Setup (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:339: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:339: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.705809564s)
functional_test.go:344: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-690311
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.78s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2084: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:352: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image load --daemon gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:352: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image load --daemon gcr.io/google-containers/addon-resizer:functional-690311: (4.573590537s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.78s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:362: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image load --daemon gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:362: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image load --daemon gcr.io/google-containers/addon-resizer:functional-690311: (4.561940303s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:232: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
2023/02/23 04:33:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:232: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.647427734s)
functional_test.go:237: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:242: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image load --daemon gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:242: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image load --daemon gcr.io/google-containers/addon-resizer:functional-690311: (4.535632938s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (6.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:377: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image save gcr.io/google-containers/addon-resizer:functional-690311 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:377: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image save gcr.io/google-containers/addon-resizer:functional-690311 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.38058657s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.38s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:389: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image rm gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.90s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:406: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:406: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.690557465s)
functional_test.go:445: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.90s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.50s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:416: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:421: (dbg) Run:  out/minikube-linux-amd64 -p functional-690311 image save --daemon gcr.io/google-containers/addon-resizer:functional-690311
functional_test.go:421: (dbg) Done: out/minikube-linux-amd64 -p functional-690311 image save --daemon gcr.io/google-containers/addon-resizer:functional-690311: (1.359365331s)
functional_test.go:426: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-690311
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.50s)
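
Taken together, the last few subtests exercise a full image round-trip between the host and the cluster's containerd store. A condensed sketch using the same subcommands (the ./addon-resizer-save.tar path is an example, not the Jenkins workspace path above):

	out/minikube-linux-amd64 -p functional-690311 image save gcr.io/google-containers/addon-resizer:functional-690311 ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-690311 image rm gcr.io/google-containers/addon-resizer:functional-690311
	out/minikube-linux-amd64 -p functional-690311 image load ./addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-690311 image ls   # the tag should be listed again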

TestFunctional/delete_addon-resizer_images (0.16s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:187: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-690311
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.07s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:195: (dbg) Run:  docker rmi -f localhost/my-image:functional-690311
--- PASS: TestFunctional/delete_my-image_image (0.07s)

TestFunctional/delete_minikube_cached_images (0.06s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:203: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-690311
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestIngressAddonLegacy/StartLegacyK8sCluster (101.01s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-680225 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0223 04:34:08.188450   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-680225 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m41.011689607s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (101.01s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.31s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons enable ingress --alsologtostderr -v=5: (18.308962701s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.31s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.39s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (36.88s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-680225 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-680225 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (15.499355893s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-680225 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-680225 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6a87e405-96ba-4002-bc2a-39c360a065ea] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6a87e405-96ba-4002-bc2a-39c360a065ea] Running
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.013177891s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-680225 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-680225 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-680225 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.24
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons disable ingress-dns --alsologtostderr -v=1: (1.764441247s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-680225 addons disable ingress --alsologtostderr -v=1: (7.382590982s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (36.88s)

TestJSONOutput/start/Command (78.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-236258 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0223 04:36:24.342376   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:36:52.029287   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-236258 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m18.411671757s)
--- PASS: TestJSONOutput/start/Command (78.41s)
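
With --output=json, minikube emits one CloudEvent per line on stdout; the Audit and CurrentSteps subtests below parse that stream. A sketch of inspecting the step events with jq (the io.k8s.sigs.minikube.step event type is minikube's JSON-output step event; the data.currentstep/data.totalsteps/data.name field names are assumptions about the payload, not shown in this log):

	out/minikube-linux-amd64 start -p json-output-236258 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2 --container-runtime=containerd \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'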

TestJSONOutput/start/Audit (0.00s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-236258 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-236258 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.09s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-236258 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-236258 --output=json --user=testUser: (7.092612205s)
--- PASS: TestJSONOutput/stop/Command (7.09s)
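
Note: the pause, unpause, and stop Command subtests above drive the same profile through its lifecycle with identical JSON flags; back to back, the sequence is simply:

    out/minikube-linux-amd64 pause   -p json-output-236258 --output=json --user=testUser
    out/minikube-linux-amd64 unpause -p json-output-236258 --output=json --user=testUser
    out/minikube-linux-amd64 stop    -p json-output-236258 --output=json --user=testUser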

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.45s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-085672 --memory=2200 --output=json --wait=true --driver=fail
E0223 04:37:50.400526   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:50.405842   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:50.416121   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:50.436415   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-085672 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.396316ms)

-- stdout --
	{"specversion":"1.0","id":"9a618408-4133-42da-9050-95d0fd0ee431","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-085672] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3834ac8d-ddfb-4da1-add9-9b7639185acc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15909"}}
	{"specversion":"1.0","id":"28414b88-e454-4004-8e2a-18958bfde366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cbde8ae4-58d2-4587-a143-ee4174bb35a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig"}}
	{"specversion":"1.0","id":"139a08e8-7728-4496-a593-829139cd90d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube"}}
	{"specversion":"1.0","id":"cb09e7a6-85c6-4a96-ad9a-e3b7b170be83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f34dab34-3f70-478e-b535-862ddfbf8788","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f1c61913-f33a-4643-ab1d-7c2dcc2744c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-085672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-085672
E0223 04:37:50.477068   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:50.557379   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:50.717763   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
--- PASS: TestErrorJSONOutput (0.45s)
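
Note: the error event in the stdout above carries the machine-readable failure (exitcode "56", name "DRV_UNSUPPORTED_OS"); a sketch of pulling it out of the stream, assuming bash and jq on the host (the filter is illustrative, not part of the test):

    out/minikube-linux-amd64 start -p json-output-error-085672 --memory=2200 \
        --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | "\(.data.name) exit=\(.data.exitcode)"'
    echo "minikube exit code: ${PIPESTATUS[0]}"   # 56, per the Non-zero exit line above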

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (111.92s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-614660 --driver=kvm2  --container-runtime=containerd
E0223 04:37:51.037921   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:51.678422   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:52.958992   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:37:55.519869   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:38:00.640638   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:38:10.880936   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:38:31.361593   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-614660 --driver=kvm2  --container-runtime=containerd: (53.456403977s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-617241 --driver=kvm2  --container-runtime=containerd
E0223 04:39:12.321948   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-617241 --driver=kvm2  --container-runtime=containerd: (55.371927092s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-614660
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-617241
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-617241" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-617241
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-617241: (1.030933009s)
helpers_test.go:175: Cleaning up "first-614660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-614660
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-614660: (1.025867681s)
--- PASS: TestMinikubeProfile (111.92s)
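
Note: the profile bookkeeping above boils down to switching the active profile and listing both as JSON; a sketch with this run's profile names (jq is used only for pretty-printing and is an assumption about the host):

    out/minikube-linux-amd64 profile first-614660            # make first-614660 the active profile
    out/minikube-linux-amd64 profile list -ojson | jq .
    out/minikube-linux-amd64 profile second-617241           # switch to the second profile
    out/minikube-linux-amd64 profile list -ojson | jq .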

TestMountStart/serial/StartWithMountFirst (29.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-568250 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-568250 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (28.54806099s)
--- PASS: TestMountStart/serial/StartWithMountFirst (29.55s)

TestMountStart/serial/VerifyMountFirst (0.39s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-568250 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-568250 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)
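
Note: combined with StartWithMountFirst, this pair amounts to booting a Kubernetes-less VM with a 9p host mount on an explicit port and then confirming it from the guest; a sketch reusing the flags from the log:

    out/minikube-linux-amd64 start -p mount-start-1-568250 --memory=2048 \
        --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
        --no-kubernetes --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p mount-start-1-568250 ssh -- ls /minikube-host   # host directory is visible
    out/minikube-linux-amd64 -p mount-start-1-568250 ssh -- mount | grep 9p     # the 9p mount is present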

TestMountStart/serial/StartWithMountSecond (28.42s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-580027 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0223 04:40:34.243928   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-580027 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (27.419901194s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.42s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580027 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580027 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.92s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-568250 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.92s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580027 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580027 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-580027
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-580027: (1.158234855s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (23.79s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-580027
E0223 04:40:45.126670   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.131951   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.142225   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.162511   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.202841   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.283174   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.443581   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:45.764134   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:46.405103   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:47.685322   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:50.247198   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:40:55.368146   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:41:05.608921   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-580027: (22.788550801s)
--- PASS: TestMountStart/serial/RestartStopped (23.79s)
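
Note: RestartStopped is a plain stop followed by a bare start with no flags; the profile's saved configuration, including the mount settings (as VerifyMountPostStop confirms below), is reused:

    out/minikube-linux-amd64 stop  -p mount-start-2-580027
    out/minikube-linux-amd64 start -p mount-start-2-580027   # restarts with the saved profile config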

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580027 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-580027 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (167.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-945787 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0223 04:41:24.342699   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:41:26.089926   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:42:07.050981   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:42:50.400445   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:43:18.084175   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:43:28.971301   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-945787 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m46.646480569s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (167.06s)
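
Note: the two-node cluster comes up from a single invocation; the same call plus the status check, exactly as run above:

    out/minikube-linux-amd64 start -p multinode-945787 --wait=true --memory=2200 \
        --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=containerd
    out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr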

TestMultiNode/serial/DeployApp2Nodes (6.81s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-945787 -- rollout status deployment/busybox: (4.918184465s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-gfc8r -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-gfc8r -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-gfc8r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.81s)
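
Note: the deployment check rolls out a busybox Deployment and exercises cluster DNS from every pod; a sketch that discovers the pod names instead of hard-coding this run's busybox-6b86dd6d48-* suffixes:

    out/minikube-linux-amd64 kubectl -p multinode-945787 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-amd64 kubectl -p multinode-945787 -- rollout status deployment/busybox
    # illustrative loop; the test execs each pod by name
    for pod in $(out/minikube-linux-amd64 kubectl -p multinode-945787 -- get pods -o jsonpath='{.items[*].metadata.name}'); do
        out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done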

TestMultiNode/serial/PingHostFrom2Pods (0.87s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-gfc8r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-gfc8r -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.87s)
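
Note: the host-reachability check resolves host.minikube.internal inside a pod and pings the result (192.168.39.1 in this run); the awk/cut pipeline is the test's own extraction of the IP from busybox's nslookup output, and the pod name below is specific to this run:

    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- \
        sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-945787 -- exec busybox-6b86dd6d48-g48pr -- sh -c "ping -c 1 $HOST_IP"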

TestMultiNode/serial/AddNode (68.08s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-945787 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-945787 -v 3 --alsologtostderr: (1m7.500479399s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (68.08s)

TestMultiNode/serial/ProfileList (0.26s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (7.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp testdata/cp-test.txt multinode-945787:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile292436659/001/cp-test_multinode-945787.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787:/home/docker/cp-test.txt multinode-945787-m02:/home/docker/cp-test_multinode-945787_multinode-945787-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m02 "sudo cat /home/docker/cp-test_multinode-945787_multinode-945787-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787:/home/docker/cp-test.txt multinode-945787-m03:/home/docker/cp-test_multinode-945787_multinode-945787-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m03 "sudo cat /home/docker/cp-test_multinode-945787_multinode-945787-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp testdata/cp-test.txt multinode-945787-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile292436659/001/cp-test_multinode-945787-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787-m02:/home/docker/cp-test.txt multinode-945787:/home/docker/cp-test_multinode-945787-m02_multinode-945787.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test_multinode-945787-m02_multinode-945787.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787-m02:/home/docker/cp-test.txt multinode-945787-m03:/home/docker/cp-test_multinode-945787-m02_multinode-945787-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m03 "sudo cat /home/docker/cp-test_multinode-945787-m02_multinode-945787-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp testdata/cp-test.txt multinode-945787-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile292436659/001/cp-test_multinode-945787-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787-m03:/home/docker/cp-test.txt multinode-945787:/home/docker/cp-test_multinode-945787-m03_multinode-945787.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test_multinode-945787-m03_multinode-945787.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787-m03:/home/docker/cp-test.txt multinode-945787-m02:/home/docker/cp-test_multinode-945787-m03_multinode-945787-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787-m02 "sudo cat /home/docker/cp-test_multinode-945787-m03_multinode-945787-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.46s)
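
Note: CopyFile covers every direction of minikube cp (local to node, node to local, node to node), verifying each hop with ssh -n ... sudo cat; two legs of the matrix as a sketch, using commands from the log:

    # local -> node, then read it back inside the node
    out/minikube-linux-amd64 -p multinode-945787 cp testdata/cp-test.txt multinode-945787:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-945787 ssh -n multinode-945787 "sudo cat /home/docker/cp-test.txt"
    # node -> node
    out/minikube-linux-amd64 -p multinode-945787 cp multinode-945787:/home/docker/cp-test.txt \
        multinode-945787-m02:/home/docker/cp-test_multinode-945787_multinode-945787-m02.txt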

TestMultiNode/serial/StopNode (2.16s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-945787 node stop m03: (1.295324976s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-945787 status: exit status 7 (423.414244ms)

-- stdout --
	multinode-945787
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945787-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945787-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr: exit status 7 (436.418336ms)

-- stdout --
	multinode-945787
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-945787-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-945787-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 04:45:21.837352   23122 out.go:296] Setting OutFile to fd 1 ...
	I0223 04:45:21.837468   23122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:45:21.837480   23122 out.go:309] Setting ErrFile to fd 2...
	I0223 04:45:21.837487   23122 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:45:21.837621   23122 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	I0223 04:45:21.837816   23122 out.go:303] Setting JSON to false
	I0223 04:45:21.837849   23122 mustload.go:65] Loading cluster: multinode-945787
	I0223 04:45:21.837894   23122 notify.go:220] Checking for updates...
	I0223 04:45:21.839377   23122 config.go:182] Loaded profile config "multinode-945787": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 04:45:21.839440   23122 status.go:255] checking status of multinode-945787 ...
	I0223 04:45:21.840108   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:21.840151   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:21.857070   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43539
	I0223 04:45:21.857536   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:21.858192   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:21.858232   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:21.858754   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:21.858947   23122 main.go:141] libmachine: (multinode-945787) Calling .GetState
	I0223 04:45:21.860522   23122 status.go:330] multinode-945787 host status = "Running" (err=<nil>)
	I0223 04:45:21.860539   23122 host.go:66] Checking if "multinode-945787" exists ...
	I0223 04:45:21.860811   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:21.860842   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:21.875082   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46847
	I0223 04:45:21.875468   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:21.875934   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:21.875956   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:21.876253   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:21.876443   23122 main.go:141] libmachine: (multinode-945787) Calling .GetIP
	I0223 04:45:21.879358   23122 main.go:141] libmachine: (multinode-945787) DBG | domain multinode-945787 has defined MAC address 52:54:00:39:c5:34 in network mk-multinode-945787
	I0223 04:45:21.879839   23122 main.go:141] libmachine: (multinode-945787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c5:34", ip: ""} in network mk-multinode-945787: {Iface:virbr1 ExpiryTime:2023-02-23 05:41:24 +0000 UTC Type:0 Mac:52:54:00:39:c5:34 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-945787 Clientid:01:52:54:00:39:c5:34}
	I0223 04:45:21.879869   23122 main.go:141] libmachine: (multinode-945787) DBG | domain multinode-945787 has defined IP address 192.168.39.15 and MAC address 52:54:00:39:c5:34 in network mk-multinode-945787
	I0223 04:45:21.880027   23122 host.go:66] Checking if "multinode-945787" exists ...
	I0223 04:45:21.880396   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:21.880438   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:21.894529   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42017
	I0223 04:45:21.894934   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:21.895362   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:21.895386   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:21.895670   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:21.895825   23122 main.go:141] libmachine: (multinode-945787) Calling .DriverName
	I0223 04:45:21.896027   23122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 04:45:21.896053   23122 main.go:141] libmachine: (multinode-945787) Calling .GetSSHHostname
	I0223 04:45:21.898635   23122 main.go:141] libmachine: (multinode-945787) DBG | domain multinode-945787 has defined MAC address 52:54:00:39:c5:34 in network mk-multinode-945787
	I0223 04:45:21.899099   23122 main.go:141] libmachine: (multinode-945787) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:c5:34", ip: ""} in network mk-multinode-945787: {Iface:virbr1 ExpiryTime:2023-02-23 05:41:24 +0000 UTC Type:0 Mac:52:54:00:39:c5:34 Iaid: IPaddr:192.168.39.15 Prefix:24 Hostname:multinode-945787 Clientid:01:52:54:00:39:c5:34}
	I0223 04:45:21.899123   23122 main.go:141] libmachine: (multinode-945787) DBG | domain multinode-945787 has defined IP address 192.168.39.15 and MAC address 52:54:00:39:c5:34 in network mk-multinode-945787
	I0223 04:45:21.899298   23122 main.go:141] libmachine: (multinode-945787) Calling .GetSSHPort
	I0223 04:45:21.899473   23122 main.go:141] libmachine: (multinode-945787) Calling .GetSSHKeyPath
	I0223 04:45:21.899614   23122 main.go:141] libmachine: (multinode-945787) Calling .GetSSHUsername
	I0223 04:45:21.899738   23122 sshutil.go:53] new ssh client: &{IP:192.168.39.15 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/multinode-945787/id_rsa Username:docker}
	I0223 04:45:21.988787   23122 ssh_runner.go:195] Run: systemctl --version
	I0223 04:45:21.994454   23122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 04:45:22.007895   23122 kubeconfig.go:92] found "multinode-945787" server: "https://192.168.39.15:8443"
	I0223 04:45:22.007923   23122 api_server.go:165] Checking apiserver status ...
	I0223 04:45:22.007959   23122 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0223 04:45:22.027988   23122 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1071/cgroup
	I0223 04:45:22.037867   23122 api_server.go:181] apiserver freezer: "3:freezer:/kubepods/burstable/pod8b1216234edfc3b631d335e88a264cf2/8d1e0cfbe5f326573230a09c33a86313823f5c08dcb24a2b28d7ec21eb9d6e59"
	I0223 04:45:22.037927   23122 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod8b1216234edfc3b631d335e88a264cf2/8d1e0cfbe5f326573230a09c33a86313823f5c08dcb24a2b28d7ec21eb9d6e59/freezer.state
	I0223 04:45:22.046213   23122 api_server.go:203] freezer state: "THAWED"
	I0223 04:45:22.046238   23122 api_server.go:252] Checking apiserver healthz at https://192.168.39.15:8443/healthz ...
	I0223 04:45:22.051104   23122 api_server.go:278] https://192.168.39.15:8443/healthz returned 200:
	ok
	I0223 04:45:22.051122   23122 status.go:421] multinode-945787 apiserver status = Running (err=<nil>)
	I0223 04:45:22.051130   23122 status.go:257] multinode-945787 status: &{Name:multinode-945787 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 04:45:22.051144   23122 status.go:255] checking status of multinode-945787-m02 ...
	I0223 04:45:22.051410   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:22.051448   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:22.066504   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36291
	I0223 04:45:22.066941   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:22.067385   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:22.067411   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:22.067758   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:22.067894   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .GetState
	I0223 04:45:22.069541   23122 status.go:330] multinode-945787-m02 host status = "Running" (err=<nil>)
	I0223 04:45:22.069571   23122 host.go:66] Checking if "multinode-945787-m02" exists ...
	I0223 04:45:22.069838   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:22.069882   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:22.084569   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
	I0223 04:45:22.085024   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:22.085513   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:22.085547   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:22.085852   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:22.085989   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .GetIP
	I0223 04:45:22.088659   23122 main.go:141] libmachine: (multinode-945787-m02) DBG | domain multinode-945787-m02 has defined MAC address 52:54:00:8a:af:de in network mk-multinode-945787
	I0223 04:45:22.089088   23122 main.go:141] libmachine: (multinode-945787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:af:de", ip: ""} in network mk-multinode-945787: {Iface:virbr1 ExpiryTime:2023-02-23 05:42:40 +0000 UTC Type:0 Mac:52:54:00:8a:af:de Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-945787-m02 Clientid:01:52:54:00:8a:af:de}
	I0223 04:45:22.089111   23122 main.go:141] libmachine: (multinode-945787-m02) DBG | domain multinode-945787-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8a:af:de in network mk-multinode-945787
	I0223 04:45:22.089300   23122 host.go:66] Checking if "multinode-945787-m02" exists ...
	I0223 04:45:22.089688   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:22.089736   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:22.104924   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37751
	I0223 04:45:22.105322   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:22.105836   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:22.105851   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:22.106148   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:22.106330   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .DriverName
	I0223 04:45:22.106505   23122 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0223 04:45:22.106526   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .GetSSHHostname
	I0223 04:45:22.109254   23122 main.go:141] libmachine: (multinode-945787-m02) DBG | domain multinode-945787-m02 has defined MAC address 52:54:00:8a:af:de in network mk-multinode-945787
	I0223 04:45:22.109767   23122 main.go:141] libmachine: (multinode-945787-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8a:af:de", ip: ""} in network mk-multinode-945787: {Iface:virbr1 ExpiryTime:2023-02-23 05:42:40 +0000 UTC Type:0 Mac:52:54:00:8a:af:de Iaid: IPaddr:192.168.39.60 Prefix:24 Hostname:multinode-945787-m02 Clientid:01:52:54:00:8a:af:de}
	I0223 04:45:22.109806   23122 main.go:141] libmachine: (multinode-945787-m02) DBG | domain multinode-945787-m02 has defined IP address 192.168.39.60 and MAC address 52:54:00:8a:af:de in network mk-multinode-945787
	I0223 04:45:22.109923   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .GetSSHPort
	I0223 04:45:22.110128   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .GetSSHKeyPath
	I0223 04:45:22.110290   23122 main.go:141] libmachine: (multinode-945787-m02) Calling .GetSSHUsername
	I0223 04:45:22.110474   23122 sshutil.go:53] new ssh client: &{IP:192.168.39.60 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/15909-3857/.minikube/machines/multinode-945787-m02/id_rsa Username:docker}
	I0223 04:45:22.196573   23122 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0223 04:45:22.208468   23122 status.go:257] multinode-945787-m02 status: &{Name:multinode-945787-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0223 04:45:22.208497   23122 status.go:255] checking status of multinode-945787-m03 ...
	I0223 04:45:22.208838   23122 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:45:22.208886   23122 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:45:22.223343   23122 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0223 04:45:22.223735   23122 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:45:22.224206   23122 main.go:141] libmachine: Using API Version  1
	I0223 04:45:22.224225   23122 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:45:22.224499   23122 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:45:22.224668   23122 main.go:141] libmachine: (multinode-945787-m03) Calling .GetState
	I0223 04:45:22.226156   23122 status.go:330] multinode-945787-m03 host status = "Stopped" (err=<nil>)
	I0223 04:45:22.226180   23122 status.go:343] host is not running, skipping remaining checks
	I0223 04:45:22.226185   23122 status.go:257] multinode-945787-m03 status: &{Name:multinode-945787-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.16s)
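
Note: with one worker stopped, status exits 7 while still reporting the remaining nodes as Running (both status runs above show exit status 7); a sketch:

    out/minikube-linux-amd64 -p multinode-945787 node stop m03
    out/minikube-linux-amd64 -p multinode-945787 status
    echo "status exit code: $?"   # 7 while m03 is stopped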

TestMultiNode/serial/StartAfterStop (64.2s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 node start m03 --alsologtostderr
E0223 04:45:45.126377   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:46:12.812087   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:46:24.342283   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-945787 node start m03 --alsologtostderr: (1m3.560117317s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (64.20s)

TestMultiNode/serial/RestartKeepsNodes (484.76s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-945787
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-945787
E0223 04:47:47.390119   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:47:50.401306   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-945787: (3m4.132931036s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-945787 --wait=true -v=8 --alsologtostderr
E0223 04:50:45.126022   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:51:24.341727   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:52:50.400471   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 04:54:13.444999   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-945787 --wait=true -v=8 --alsologtostderr: (5m0.534304066s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-945787
--- PASS: TestMultiNode/serial/RestartKeepsNodes (484.76s)
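
Note: the restart check is stop-then-start with --wait=true, bracketed by node list to confirm the three-node set survives; the same sequence, from the log:

    out/minikube-linux-amd64 node list -p multinode-945787       # record the node set
    out/minikube-linux-amd64 stop -p multinode-945787
    out/minikube-linux-amd64 start -p multinode-945787 --wait=true -v=8 --alsologtostderr
    out/minikube-linux-amd64 node list -p multinode-945787       # should match the first listing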

TestMultiNode/serial/DeleteNode (2s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-945787 node delete m03: (1.46742834s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.00s)
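
Note: node deletion plus the follow-up checks, as run above (the go-template invocation in the log is the test's own per-node Ready probe):

    out/minikube-linux-amd64 -p multinode-945787 node delete m03
    out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
    kubectl get nodes    # m03 is gone; the remaining nodes report Ready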

TestMultiNode/serial/StopMultiNode (183.8s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 stop
E0223 04:55:45.126479   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 04:56:24.343358   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 04:57:08.172323   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-945787 stop: (3m3.636814078s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-945787 status: exit status 7 (81.612793ms)

-- stdout --
	multinode-945787
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-945787-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr: exit status 7 (82.242313ms)

-- stdout --
	multinode-945787
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-945787-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0223 04:57:36.950511   24289 out.go:296] Setting OutFile to fd 1 ...
	I0223 04:57:36.950987   24289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:57:36.951039   24289 out.go:309] Setting ErrFile to fd 2...
	I0223 04:57:36.951059   24289 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 04:57:36.951293   24289 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	I0223 04:57:36.951575   24289 out.go:303] Setting JSON to false
	I0223 04:57:36.951637   24289 mustload.go:65] Loading cluster: multinode-945787
	I0223 04:57:36.951733   24289 notify.go:220] Checking for updates...
	I0223 04:57:36.952770   24289 config.go:182] Loaded profile config "multinode-945787": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 04:57:36.952790   24289 status.go:255] checking status of multinode-945787 ...
	I0223 04:57:36.953144   24289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:57:36.953191   24289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:57:36.967595   24289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0223 04:57:36.967999   24289 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:57:36.968545   24289 main.go:141] libmachine: Using API Version  1
	I0223 04:57:36.968566   24289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:57:36.968956   24289 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:57:36.969125   24289 main.go:141] libmachine: (multinode-945787) Calling .GetState
	I0223 04:57:36.970784   24289 status.go:330] multinode-945787 host status = "Stopped" (err=<nil>)
	I0223 04:57:36.970801   24289 status.go:343] host is not running, skipping remaining checks
	I0223 04:57:36.970808   24289 status.go:257] multinode-945787 status: &{Name:multinode-945787 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0223 04:57:36.970855   24289 status.go:255] checking status of multinode-945787-m02 ...
	I0223 04:57:36.971181   24289 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0223 04:57:36.971225   24289 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0223 04:57:36.985655   24289 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33201
	I0223 04:57:36.986126   24289 main.go:141] libmachine: () Calling .GetVersion
	I0223 04:57:36.986664   24289 main.go:141] libmachine: Using API Version  1
	I0223 04:57:36.986690   24289 main.go:141] libmachine: () Calling .SetConfigRaw
	I0223 04:57:36.987020   24289 main.go:141] libmachine: () Calling .GetMachineName
	I0223 04:57:36.987241   24289 main.go:141] libmachine: (multinode-945787-m02) Calling .GetState
	I0223 04:57:36.988860   24289 status.go:330] multinode-945787-m02 host status = "Stopped" (err=<nil>)
	I0223 04:57:36.988876   24289 status.go:343] host is not running, skipping remaining checks
	I0223 04:57:36.988890   24289 status.go:257] multinode-945787-m02 status: &{Name:multinode-945787-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.80s)

TestMultiNode/serial/RestartMultiNode (299.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-945787 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0223 04:57:50.401451   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
E0223 05:00:45.126049   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 05:01:24.342260   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-945787 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (4m58.567830369s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-945787 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (299.11s)

TestMultiNode/serial/ValidateNameConflict (60.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-945787
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-945787-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-945787-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (71.961616ms)

-- stdout --
	* [multinode-945787-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-945787-m02' is duplicated with machine name 'multinode-945787-m02' in profile 'multinode-945787'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-945787-m03 --driver=kvm2  --container-runtime=containerd
E0223 05:02:50.401122   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-945787-m03 --driver=kvm2  --container-runtime=containerd: (59.143325636s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-945787
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-945787: exit status 80 (224.905405ms)

-- stdout --
	* Adding node m03 to cluster multinode-945787
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-945787-m03 already exists in multinode-945787-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-945787-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-945787-m03: (1.029928123s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (60.52s)

TestScheduledStopUnix (125.34s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-265757 --memory=2048 --driver=kvm2  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-265757 --memory=2048 --driver=kvm2  --container-runtime=containerd: (53.655564586s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265757 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-265757 -n scheduled-stop-265757
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265757 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265757 --cancel-scheduled
E0223 05:10:45.125812   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 05:10:53.445738   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265757 -n scheduled-stop-265757
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-265757
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-265757 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0223 05:11:24.342755   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-265757
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-265757: exit status 7 (64.126648ms)

-- stdout --
	scheduled-stop-265757
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265757 -n scheduled-stop-265757
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-265757 -n scheduled-stop-265757: exit status 7 (68.896542ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-265757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-265757
--- PASS: TestScheduledStopUnix (125.34s)
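
The scheduled-stop flow exercised above can be reproduced by hand with the same flags; a minimal sketch, with "demo" as a hypothetical profile name:

	$ minikube stop -p demo --schedule 5m        # arm a stop five minutes out
	$ minikube stop -p demo --cancel-scheduled   # disarm it again
	$ minikube stop -p demo --schedule 15s       # re-arm with a short fuse
	$ minikube status -p demo --format={{.Host}} # prints "Stopped" (exit code 7) once the stop has fired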

TestRunningBinaryUpgrade (272.41s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.22.0.2276634819.exe start -p running-upgrade-120184 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0223 05:12:50.400548   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.22.0.2276634819.exe start -p running-upgrade-120184 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m54.270699938s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-120184 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-120184 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m33.483056863s)
helpers_test.go:175: Cleaning up "running-upgrade-120184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-120184
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-120184: (1.44664599s)
--- PASS: TestRunningBinaryUpgrade (272.41s)

TestKubernetesUpgrade (197.84s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m32.610523644s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-405965
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-405965: (4.099042915s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-405965 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-405965 status --format={{.Host}}: exit status 7 (74.411838ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m24.832205771s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-405965 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (114.978937ms)

-- stdout --
	* [kubernetes-upgrade-405965] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-405965
	    minikube start -p kubernetes-upgrade-405965 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4059652 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.1, by running:
	    
	    minikube start -p kubernetes-upgrade-405965 --kubernetes-version=v1.26.1
	    

** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-405965 --memory=2200 --kubernetes-version=v1.26.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (14.741757158s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-405965" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-405965
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-405965: (1.296776937s)
--- PASS: TestKubernetesUpgrade (197.84s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (92.757872ms)

-- stdout --
	* [NoKubernetes-982974] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
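
The exit-14 here is the asserted behavior: --no-kubernetes and --kubernetes-version are mutually exclusive. The two valid shapes of the command, both taken from neighboring subtests in this run, are:

	$ out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --driver=kvm2 --container-runtime=containerd
	$ out/minikube-linux-amd64 start -p NoKubernetes-982974 --driver=kvm2 --container-runtime=containerd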

TestNoKubernetes/serial/StartWithK8s (106.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982974 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982974 --driver=kvm2  --container-runtime=containerd: (1m46.428090811s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-982974 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (106.69s)

TestNoKubernetes/serial/StartWithStopK8s (41.22s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (39.886498999s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-982974 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-982974 status -o json: exit status 2 (235.135792ms)

-- stdout --
	{"Name":"NoKubernetes-982974","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-982974
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-982974: (1.102452964s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (41.22s)

TestStoppedBinaryUpgrade/Setup (2.9s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.90s)

TestStoppedBinaryUpgrade/Upgrade (217.59s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.22.0.774121700.exe start -p stopped-upgrade-221064 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
E0223 05:13:48.173129   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.22.0.774121700.exe start -p stopped-upgrade-221064 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m22.035578344s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.22.0.774121700.exe -p stopped-upgrade-221064 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.22.0.774121700.exe -p stopped-upgrade-221064 stop: (4.129107766s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-221064 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-221064 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m11.422878215s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (217.59s)
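
Distilled from the log, the upgrade path this test validates is: provision with an old release binary, stop the cluster, then start the same stopped cluster with the binary under test:

	$ /tmp/minikube-v1.22.0.774121700.exe start -p stopped-upgrade-221064 --memory=2200 --vm-driver=kvm2 --container-runtime=containerd
	$ /tmp/minikube-v1.22.0.774121700.exe -p stopped-upgrade-221064 stop
	$ out/minikube-linux-amd64 start -p stopped-upgrade-221064 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=containerd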

TestNoKubernetes/serial/Start (31.01s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982974 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (31.014136179s)
--- PASS: TestNoKubernetes/serial/Start (31.01s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-982974 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-982974 "sudo systemctl is-active --quiet service kubelet": exit status 1 (222.585024ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

TestNoKubernetes/serial/ProfileList (0.88s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.88s)

TestNoKubernetes/serial/Stop (1.67s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-982974
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-982974: (1.674024043s)
--- PASS: TestNoKubernetes/serial/Stop (1.67s)

TestNoKubernetes/serial/StartNoArgs (66.74s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-982974 --driver=kvm2  --container-runtime=containerd
E0223 05:15:45.126394   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-982974 --driver=kvm2  --container-runtime=containerd: (1m6.740066849s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (66.74s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-982974 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-982974 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.08076ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-221064
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.92s)

TestNetworkPlugins/group/false (3.87s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p false-982980 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-982980 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (113.049091ms)

-- stdout --
	* [false-982980] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=15909
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0223 05:17:26.900216   31379 out.go:296] Setting OutFile to fd 1 ...
	I0223 05:17:26.900554   31379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 05:17:26.900574   31379 out.go:309] Setting ErrFile to fd 2...
	I0223 05:17:26.900582   31379 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0223 05:17:26.900847   31379 root.go:336] Updating PATH: /home/jenkins/minikube-integration/15909-3857/.minikube/bin
	I0223 05:17:26.901988   31379 out.go:303] Setting JSON to false
	I0223 05:17:26.903210   31379 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":3591,"bootTime":1677125856,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1029-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0223 05:17:26.903287   31379 start.go:135] virtualization: kvm guest
	I0223 05:17:26.906372   31379 out.go:177] * [false-982980] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0223 05:17:26.908123   31379 out.go:177]   - MINIKUBE_LOCATION=15909
	I0223 05:17:26.908150   31379 notify.go:220] Checking for updates...
	I0223 05:17:26.909933   31379 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0223 05:17:26.911685   31379 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/15909-3857/kubeconfig
	I0223 05:17:26.913360   31379 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/15909-3857/.minikube
	I0223 05:17:26.914948   31379 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0223 05:17:26.916496   31379 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0223 05:17:26.918326   31379 config.go:182] Loaded profile config "cert-expiration-984816": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 05:17:26.918440   31379 config.go:182] Loaded profile config "cert-options-603958": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 05:17:26.918531   31379 config.go:182] Loaded profile config "force-systemd-flag-731641": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.1
	I0223 05:17:26.918592   31379 driver.go:365] Setting default libvirt URI to qemu:///system
	I0223 05:17:26.955943   31379 out.go:177] * Using the kvm2 driver based on user configuration
	I0223 05:17:26.957473   31379 start.go:296] selected driver: kvm2
	I0223 05:17:26.957493   31379 start.go:857] validating driver "kvm2" against <nil>
	I0223 05:17:26.957507   31379 start.go:868] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0223 05:17:26.959970   31379 out.go:177] 
	W0223 05:17:26.961628   31379 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0223 05:17:26.963366   31379 out.go:177] 

** /stderr **
net_test.go:86: 
----------------------- debugLogs start: false-982980 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-982980

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-982980

>>> host: /etc/nsswitch.conf:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/hosts:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/resolv.conf:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-982980

>>> host: crictl pods:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: crictl containers:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> k8s: describe netcat deployment:
error: context "false-982980" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-982980" does not exist

>>> k8s: netcat logs:
error: context "false-982980" does not exist

>>> k8s: describe coredns deployment:
error: context "false-982980" does not exist

>>> k8s: describe coredns pods:
error: context "false-982980" does not exist

>>> k8s: coredns logs:
error: context "false-982980" does not exist

>>> k8s: describe api server pod(s):
error: context "false-982980" does not exist

>>> k8s: api server logs:
error: context "false-982980" does not exist

>>> host: /etc/cni:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: ip a s:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: ip r s:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: iptables-save:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: iptables table nat:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> k8s: describe kube-proxy daemon set:
error: context "false-982980" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-982980" does not exist

>>> k8s: kube-proxy logs:
error: context "false-982980" does not exist

>>> host: kubelet daemon status:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: kubelet daemon config:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> k8s: kubelet logs:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 05:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.39.46:8443
  name: cert-expiration-984816
contexts:
- context:
    cluster: cert-expiration-984816
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 05:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-984816
  name: cert-expiration-984816
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-984816
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/cert-expiration-984816/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/cert-expiration-984816/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-982980

>>> host: docker daemon status:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: docker daemon config:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/docker/daemon.json:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: docker system info:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: cri-docker daemon status:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: cri-docker daemon config:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: cri-dockerd version:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: containerd daemon status:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: containerd daemon config:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/containerd/config.toml:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: containerd config dump:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: crio daemon status:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: crio daemon config:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: /etc/crio:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

>>> host: crio config:
* Profile "false-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-982980"

----------------------- debugLogs end: false-982980 [took: 3.219944794s] --------------------------------
helpers_test.go:175: Cleaning up "false-982980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-982980
--- PASS: TestNetworkPlugins/group/false (3.87s)

TestPause/serial/Start (120.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-784319 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0223 05:17:50.400658   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-784319 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (2m0.654297141s)
--- PASS: TestPause/serial/Start (120.65s)

TestStartStop/group/old-k8s-version/serial/FirstStart (164.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-755724 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-755724 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (2m44.686102866s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.69s)

TestStartStop/group/no-preload/serial/FirstStart (160.13s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-251177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-251177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (2m40.129065354s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (160.13s)

TestPause/serial/SecondStartNoReconfiguration (5.72s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-784319 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-784319 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (5.704211077s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.72s)

TestPause/serial/Pause (0.66s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-784319 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

TestPause/serial/VerifyStatus (0.24s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-784319 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-784319 --output=json --layout=cluster: exit status 2 (243.138085ms)
-- stdout --
	{"Name":"pause-784319","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-784319","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.24s)

TestPause/serial/Unpause (0.58s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-784319 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.58s)

TestPause/serial/PauseAgain (0.76s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-784319 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.76s)

TestPause/serial/DeletePaused (1.11s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-784319 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-784319 --alsologtostderr -v=5: (1.10569962s)
--- PASS: TestPause/serial/DeletePaused (1.11s)

TestPause/serial/VerifyDeletedResources (6.05s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (6.045020615s)
--- PASS: TestPause/serial/VerifyDeletedResources (6.05s)

TestStartStop/group/embed-certs/serial/FirstStart (71.1s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-916508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-916508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (1m11.101047232s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (71.10s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-235579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-235579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (1m40.594481155s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.59s)

TestStartStop/group/no-preload/serial/DeployApp (10.51s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-251177 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a3ae8c68-1931-415f-965e-d4de8207bb42] Pending
helpers_test.go:344: "busybox" [a3ae8c68-1931-415f-965e-d4de8207bb42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a3ae8c68-1931-415f-965e-d4de8207bb42] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.039126717s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-251177 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.51s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-755724 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [819e340d-3ab1-4964-8894-301454cd2424] Pending
helpers_test.go:344: "busybox" [819e340d-3ab1-4964-8894-301454cd2424] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [819e340d-3ab1-4964-8894-301454cd2424] Running
E0223 05:20:45.126775   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.024409848s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-755724 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-755724 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-755724 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-251177 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-251177 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067868812s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-251177 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/old-k8s-version/serial/Stop (92.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-755724 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-755724 --alsologtostderr -v=3: (1m32.267433073s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (92.27s)

TestStartStop/group/no-preload/serial/Stop (92.64s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-251177 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-251177 --alsologtostderr -v=3: (1m32.640260606s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (92.64s)

TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-916508 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [259f2f91-685c-4905-8863-447788a94ec0] Pending
helpers_test.go:344: "busybox" [259f2f91-685c-4905-8863-447788a94ec0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0223 05:21:07.392365   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
helpers_test.go:344: "busybox" [259f2f91-685c-4905-8863-447788a94ec0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.021935745s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-916508 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.43s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-916508 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-916508 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.84s)

TestStartStop/group/embed-certs/serial/Stop (91.8s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-916508 --alsologtostderr -v=3
E0223 05:21:24.341435   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-916508 --alsologtostderr -v=3: (1m31.802012163s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.80s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-235579 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [80b99344-567f-4b5a-a9ce-6c933df16ac0] Pending
helpers_test.go:344: "busybox" [80b99344-567f-4b5a-a9ce-6c933df16ac0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [80b99344-567f-4b5a-a9ce-6c933df16ac0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.022647993s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-235579 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-235579 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-235579 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (91.86s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-235579 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-235579 --alsologtostderr -v=3: (1m31.854952991s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.86s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-755724 -n old-k8s-version-755724
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-755724 -n old-k8s-version-755724: exit status 7 (70.032395ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-755724 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (372.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-755724 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-755724 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (6m12.123847209s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-755724 -n old-k8s-version-755724
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (372.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-251177 -n no-preload-251177
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-251177 -n no-preload-251177: exit status 7 (66.616002ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-251177 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (323.88s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-251177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-251177 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (5m23.591220777s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-251177 -n no-preload-251177
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (323.88s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-916508 -n embed-certs-916508
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-916508 -n embed-certs-916508: exit status 7 (100.544111ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-916508 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (664.22s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-916508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
E0223 05:22:50.400117   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-916508 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (11m3.925726071s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-916508 -n embed-certs-916508
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (664.22s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579: exit status 7 (64.515337ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-235579 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (384.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-235579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
E0223 05:25:45.126153   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 05:26:24.341438   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
E0223 05:27:33.446986   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-235579 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (6m24.455986769s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (384.82s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9959l" [f5e165fb-fb40-4f41-8756-af816b8ad8c6] Running
E0223 05:27:50.399953   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017822704s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-9959l" [f5e165fb-fb40-4f41-8756-af816b8ad8c6] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007246956s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-251177 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-251177 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.64s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-251177 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-251177 -n no-preload-251177
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-251177 -n no-preload-251177: exit status 2 (264.504034ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-251177 -n no-preload-251177
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-251177 -n no-preload-251177: exit status 2 (249.820531ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-251177 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-251177 -n no-preload-251177
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-251177 -n no-preload-251177
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.64s)

TestStartStop/group/newest-cni/serial/FirstStart (65.2s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-139001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-139001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (1m5.197450271s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.20s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7wmqv" [75ef558b-f223-4113-8a21-88811e628e97] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.017467407s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7wmqv" [75ef558b-f223-4113-8a21-88811e628e97] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007552051s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-755724 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-755724 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-755724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-755724 -n old-k8s-version-755724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-755724 -n old-k8s-version-755724: exit status 2 (260.3848ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-755724 -n old-k8s-version-755724
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-755724 -n old-k8s-version-755724: exit status 2 (261.246433ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-755724 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-755724 -n old-k8s-version-755724
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-755724 -n old-k8s-version-755724
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.64s)

TestNetworkPlugins/group/auto/Start (113.26s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m53.260527942s)
--- PASS: TestNetworkPlugins/group/auto/Start (113.26s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-139001 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-139001 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028758818s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/newest-cni/serial/Stop (15.29s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-139001 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-139001 --alsologtostderr -v=3: (15.292179443s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (15.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-139001 -n newest-cni-139001
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-139001 -n newest-cni-139001: exit status 7 (84.394695ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-139001 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (90.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-139001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-139001 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.1: (1m30.567179018s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-139001 -n newest-cni-139001
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (90.87s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rbltr" [19595ce2-c6a8-4611-b9a7-14d9dc5d4011] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rbltr" [19595ce2-c6a8-4611-b9a7-14d9dc5d4011] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.262354386s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.26s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-rbltr" [19595ce2-c6a8-4611-b9a7-14d9dc5d4011] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024959921s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-235579 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-235579 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-235579 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579: exit status 2 (246.548006ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579: exit status 2 (246.716228ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-235579 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-235579 -n default-k8s-diff-port-235579
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.50s)

TestNetworkPlugins/group/kindnet/Start (75.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0223 05:30:28.174140   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
E0223 05:30:37.447869   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:37.453192   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:37.463533   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:37.483791   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:37.524067   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:37.604414   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:37.764933   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:38.085531   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:38.387076   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.392375   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.402659   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.423000   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.463464   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.543855   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.704332   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:38.726583   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:39.025210   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:39.665722   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
E0223 05:30:40.007189   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:40.946272   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m15.571635726s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-982980 replace --force -f testdata/netcat-deployment.yaml
E0223 05:30:42.568032   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:43.507062   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
net_test.go:148: (dbg) Done: kubectl --context auto-982980 replace --force -f testdata/netcat-deployment.yaml: (2.356693083s)
E0223 05:30:45.126068   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/ingress-addon-legacy-680225/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-vr5dw" [5d6e5769-ecba-455c-b010-442070237655] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 05:30:47.689097   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:48.627724   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-vr5dw" [5d6e5769-ecba-455c-b010-442070237655] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.01322949s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.29s)
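The NetCatPod steps above follow a fixed pattern: force-replace the netcat deployment from testdata/netcat-deployment.yaml, then poll pods labeled app=netcat until they report Running, with a 15m budget (about 9-13s in practice in this run). A hedged sketch of that loop shelling out to kubectl; the context name comes from the log, while the poll interval is an assumption:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	ctx := "auto-982980" // context name taken from the log above
    	// Step 1: force-replace the netcat deployment, as the test does.
    	exec.Command("kubectl", "--context", ctx, "replace", "--force",
    		"-f", "testdata/netcat-deployment.yaml").Run()

    	// Step 2: poll pod phases until every app=netcat pod is Running.
    	deadline := time.Now().Add(15 * time.Minute) // the 15m budget from the log
    	for time.Now().Before(deadline) {
    		out, _ := exec.Command("kubectl", "--context", ctx, "get", "pods",
    			"-l", "app=netcat",
    			"-o", "jsonpath={.items[*].status.phase}").Output()
    		phases := strings.Fields(string(out))
    		ready := len(phases) > 0
    		for _, p := range phases {
    			if p != "Running" {
    				ready = false
    			}
    		}
    		if ready {
    			fmt.Println("app=netcat healthy")
    			return
    		}
    		time.Sleep(2 * time.Second) // poll interval is an assumption
    	}
    	fmt.Println("timed out waiting for app=netcat")
    }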

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-139001 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
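VerifyKubernetesImages works by listing the node's images as JSON over ssh and flagging anything outside an expected Kubernetes image set, which is why kindest/kindnetd is called out above. A sketch of that check; the allowlist is elided and the JSON field names follow crictl's output format:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // Matches the shape of `crictl images -o json` output.
    type imageList struct {
    	Images []struct {
    		RepoTags []string `json:"repoTags"`
    	} `json:"images"`
    }

    func main() {
    	out, err := exec.Command("sudo", "crictl", "images", "-o", "json").Output()
    	if err != nil {
    		fmt.Println("crictl failed:", err)
    		return
    	}
    	var list imageList
    	if err := json.Unmarshal(out, &list); err != nil {
    		fmt.Println("bad JSON:", err)
    		return
    	}
    	// Allowlist elided; in the real test it holds the expected k8s images.
    	expected := map[string]bool{}
    	for _, img := range list.Images {
    		for _, tag := range img.RepoTags {
    			if !expected[tag] {
    				fmt.Println("Found non-minikube image:", tag)
    			}
    		}
    	}
    }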

TestStartStop/group/newest-cni/serial/Pause (2.51s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-139001 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-139001 -n newest-cni-139001
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-139001 -n newest-cni-139001: exit status 2 (267.371763ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-139001 -n newest-cni-139001
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-139001 -n newest-cni-139001: exit status 2 (281.535397ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-139001 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-139001 -n newest-cni-139001
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-139001 -n newest-cni-139001
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)
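The Pause sequence above reads exit codes rather than treating them as failures: after `minikube pause`, `minikube status` exits 2 because not every component is Running (APIServer=Paused, Kubelet=Stopped), which the test logs as "may be ok"; after `minikube unpause` the same calls succeed. A sketch of reading the field value and exit code together; the profile name comes from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // status runs `minikube status --format {{.FIELD}}` and returns the
    // printed value plus the process exit code (2 means "not all Running").
    func status(profile, field string) (string, int) {
    	cmd := exec.Command("minikube", "status",
    		"--format", "{{."+field+"}}", "-p", profile, "-n", profile)
    	out, err := cmd.Output() // stdout is still captured on a non-zero exit
    	code := 0
    	if ee, ok := err.(*exec.ExitError); ok {
    		code = ee.ExitCode()
    	}
    	return strings.TrimSpace(string(out)), code
    }

    func main() {
    	p := "newest-cni-139001" // profile name from the log
    	exec.Command("minikube", "pause", "-p", p).Run()
    	v, code := status(p, "APIServer")
    	fmt.Printf("APIServer=%q exit=%d (exit 2 is expected while paused)\n", v, code)
    	exec.Command("minikube", "unpause", "-p", p).Run()
    }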

TestNetworkPlugins/group/auto/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-982980 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)
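The DNS probe asserts that, from inside a pod, the short name kubernetes.default resolves through the cluster DNS (the pod's resolv.conf search domains expand it to kubernetes.default.svc.cluster.local). A conceptual sketch of the same lookup; run outside a cluster pod it is expected to fail:

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	// Inside a pod, resolv.conf search domains expand this short name to
    	// kubernetes.default.svc.cluster.local; outside a pod it should fail.
    	addrs, err := net.LookupHost("kubernetes.default")
    	if err != nil {
    		fmt.Println("lookup failed (expected outside a cluster pod):", err)
    		return
    	}
    	fmt.Println("resolved to", addrs)
    }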

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
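The last two probes differ only in target: Localhost nc's to localhost:8080 inside the pod (no CNI involvement), while HairPin dials the pod's own Service name, netcat:8080, which succeeds only when the network plugin hairpins Service traffic back to the originating pod. A conceptual sketch of the hairpin dial, meant to run inside the serving pod:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the pod's own Service, as `nc -z netcat 8080` does above; this
    	// only succeeds inside the serving pod when hairpin NAT works.
    	conn, err := net.DialTimeout("tcp", "netcat:8080", 5*time.Second)
    	if err != nil {
    		fmt.Println("hairpin check failed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("hairpin ok")
    }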

TestNetworkPlugins/group/calico/Start (99.54s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E0223 05:30:57.929433   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:30:58.868963   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m39.536905185s)
--- PASS: TestNetworkPlugins/group/calico/Start (99.54s)

TestNetworkPlugins/group/custom-flannel/Start (106.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0223 05:31:18.410221   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:31:19.350130   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m46.366927903s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (106.37s)
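Note the --cni value in this run: unlike the built-in plugin names used elsewhere in the report (kindnet, calico, flannel, bridge), custom-flannel passes a path to a CNI manifest, testdata/kube-flannel.yaml. An illustrative dispatch on the flag value (not minikube's actual implementation):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // cniKind is illustrative only, not minikube's actual dispatch logic.
    func cniKind(v string) string {
    	switch v {
    	case "kindnet", "calico", "flannel", "bridge":
    		return "built-in plugin"
    	default:
    		if strings.HasSuffix(v, ".yaml") || strings.HasSuffix(v, ".yml") {
    			return "custom manifest applied after cluster start"
    		}
    		return "unknown"
    	}
    }

    func main() {
    	fmt.Println(cniKind("testdata/kube-flannel.yaml")) // -> custom manifest
    }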

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-26gcs" [25939ba1-3a7f-4778-b750-6c8b31dd5ac9] Running
E0223 05:31:24.341959   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/addons-049813/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018997617s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.25s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.5s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-982980 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-mc8sf" [4be0a5be-7df7-4350-a5fa-cef80929f5f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-mc8sf" [4be0a5be-7df7-4350-a5fa-cef80929f5f1] Running
E0223 05:31:37.654017   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:37.659327   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:37.669687   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:37.689987   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:37.730611   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:37.810975   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:37.971445   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:38.291844   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:38.932549   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
E0223 05:31:40.213569   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.016804367s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.50s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-982980 exec deployment/netcat -- nslookup kubernetes.default
E0223 05:31:42.774374   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/Start (121.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
E0223 05:32:18.616010   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (2m1.436609627s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (121.44s)

TestNetworkPlugins/group/calico/ControllerPod (5.03s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-qtxqz" [78bff75d-0eea-4b9c-8727-98bc86c0d099] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.023422234s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

TestNetworkPlugins/group/calico/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-982980 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-z2zkp" [07f77ccf-beeb-4807-ac9e-dab540bc68c2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-z2zkp" [07f77ccf-beeb-4807-ac9e-dab540bc68c2] Running
E0223 05:32:50.399853   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/functional-690311/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.010989842s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-982980 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-982980 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-jp2k6" [29974fd5-1756-4591-aabc-6286095fd804] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0223 05:32:59.576980   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/default-k8s-diff-port-235579/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-jp2k6" [29974fd5-1756-4591-aabc-6286095fd804] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.01151898s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-982980 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.17s)

TestNetworkPlugins/group/flannel/Start (92.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E0223 05:33:21.298194   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/no-preload-251177/client.crt: no such file or directory
E0223 05:33:22.231658   10897 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/old-k8s-version-755724/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m32.694387778s)
--- PASS: TestNetworkPlugins/group/flannel/Start (92.69s)

TestNetworkPlugins/group/bridge/Start (87.61s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-982980 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m27.61221539s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-xlswp" [59b3e28d-cf92-41d4-b8e6-8966cfbb5402] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.01773874s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-xlswp" [59b3e28d-cf92-41d4-b8e6-8966cfbb5402] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00862187s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-916508 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-916508 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (2.91s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-916508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-916508 -n embed-certs-916508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-916508 -n embed-certs-916508: exit status 2 (273.582088ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-916508 -n embed-certs-916508
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-916508 -n embed-certs-916508: exit status 2 (273.828519ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-916508 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-916508 -n embed-certs-916508
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-916508 -n embed-certs-916508
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.91s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-982980 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-2p9wz" [41bee905-0cb0-414d-b502-5a545a7bdc2b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-2p9wz" [41bee905-0cb0-414d-b502-5a545a7bdc2b] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.011368532s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.41s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-982980 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-6ck9r" [cf31cfd8-e723-4708-ad55-8dc61850f8ff] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.019665s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

TestNetworkPlugins/group/flannel/NetCatPod (9.41s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-982980 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-nw749" [9b301f3c-e6f2-4094-a120-f185d2edf2a6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-nw749" [9b301f3c-e6f2-4094-a120-f185d2edf2a6] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.0092815s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.41s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-982980 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-982980 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-g9sf4" [ea96c293-395a-4890-934e-c4f1da2f7782] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-g9sf4" [ea96c293-395a-4890-934e-c4f1da2f7782] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.009083146s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

TestNetworkPlugins/group/flannel/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-982980 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-982980 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.17s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-982980 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (34/292)

TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)
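The DownloadOnly skips in this group all hinge on one precondition: a preloaded images tarball for the requested Kubernetes version and container runtime already sits in the local cache, so there is nothing left to exercise. A sketch of such an existence check; the cache layout and tarball name are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	home, _ := os.UserHomeDir()
    	// Cache layout and tarball name are assumptions for illustration.
    	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
    		"preloaded-images-k8s-v18-v1.26.1-containerd-overlay2-amd64.tar.lz4")
    	if _, err := os.Stat(tarball); err == nil {
    		fmt.Println("Preload exists, images won't be cached")
    	} else {
    		fmt.Println("no preload; images would be cached individually")
    	}
    }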

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.26.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.26.1/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.1/cached-images (0.00s)

TestDownloadOnly/v1.26.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.26.1/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.1/binaries (0.00s)

TestDownloadOnly/v1.26.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.26.1/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:457: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:544: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)
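This and the six TunnelCmd skips that follow trace to one guard: `minikube tunnel` must install host routes, which requires running `route` without a password prompt. A sketch of such a probe using sudo's non-interactive flag; the exact command the test runs may differ:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// sudo -n fails instead of prompting when a password would be needed;
    	// the exact probe the test uses may differ.
    	if err := exec.Command("sudo", "-n", "route").Run(); err != nil {
    		fmt.Println("password required to execute 'route', skipping:", err)
    		return
    	}
    	fmt.Println("route usable without a password; tunnel tests could run")
    }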

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.5s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-433505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-433505
--- SKIP: TestStartStop/group/disable-driver-mounts (0.50s)

TestNetworkPlugins/group/kubenet (3.67s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as the containerd container runtime requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-982980 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-982980
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-982980
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-982980
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-982980
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-982980

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-982980" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt
extensions:
- extension:
last-update: Thu, 23 Feb 2023 05:16:45 UTC
provider: minikube.sigs.k8s.io
version: v1.29.0
name: cluster_info
server: https://192.168.39.46:8443
name: cert-expiration-984816
contexts:
- context:
cluster: cert-expiration-984816
extensions:
- extension:
last-update: Thu, 23 Feb 2023 05:16:45 UTC
provider: minikube.sigs.k8s.io
version: v1.29.0
name: context_info
namespace: default
user: cert-expiration-984816
name: cert-expiration-984816
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-984816
user:
client-certificate: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/cert-expiration-984816/client.crt
client-key: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/cert-expiration-984816/client.key
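Every kubectl-based probe above fails identically because the kubeconfig just dumped has no kubenet-982980 entry: the only context is cert-expiration-984816 and current-context is empty, so `--context kubenet-982980` resolves nothing. A hedged sketch of that lookup with client-go's clientcmd — the debugLogs harness itself just shells out to kubectl, and the kubeconfig path below is the default, an assumption for this CI run:

package main

// Load a kubeconfig and check whether a named context exists, reproducing
// the "context was not found" condition seen in the probes above.

import (
	"fmt"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// clientcmd.RecommendedHomeFile is ~/.kube/config; the CI run's actual
	// kubeconfig location is an assumption here.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatalf("loading kubeconfig: %v", err)
	}
	const want = "kubenet-982980"
	if _, ok := cfg.Contexts[want]; !ok {
		fmt.Printf("context was not found for specified context: %s\n", want)
	}
}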

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-982980

>>> host: docker daemon status:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: docker daemon config:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: docker system info:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: cri-docker daemon status:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: cri-docker daemon config:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: cri-dockerd version:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: containerd daemon status:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: containerd daemon config:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: containerd config dump:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: crio daemon status:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: crio daemon config:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: /etc/crio:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

>>> host: crio config:
* Profile "kubenet-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-982980"

----------------------- debugLogs end: kubenet-982980 [took: 3.242494947s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-982980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-982980
--- SKIP: TestNetworkPlugins/group/kubenet (3.67s)
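The gate at net_test.go:92 fires because kubenet is not a CNI plugin: it only works where the docker runtime wires pods itself, and this run uses containerd. A sketch of that guard under assumed names — minikube's actual check may be shaped differently:

package sketch

import "testing"

// containerRuntime stands in for the suite's --container-runtime flag;
// hard-coded here to mirror this report's containerd run.
const containerRuntime = "containerd"

// maybeSkipKubenet mirrors the gate at net_test.go:92: kubenet ships no CNI
// config, and every runtime other than docker requires a CNI plugin.
func maybeSkipKubenet(t *testing.T, pluginName string) {
	t.Helper()
	if pluginName == "kubenet" && containerRuntime != "docker" {
		t.Skipf("Skipping the test as the %s container runtime requires CNI", containerRuntime)
	}
}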

TestNetworkPlugins/group/cilium (6.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it interferes with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-982980 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-982980

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-982980

>>> host: /etc/nsswitch.conf:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/hosts:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/resolv.conf:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-982980

>>> host: crictl pods:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: crictl containers:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> k8s: describe netcat deployment:
error: context "cilium-982980" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-982980" does not exist

>>> k8s: netcat logs:
error: context "cilium-982980" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-982980" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-982980" does not exist

>>> k8s: coredns logs:
error: context "cilium-982980" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-982980" does not exist

>>> k8s: api server logs:
error: context "cilium-982980" does not exist

>>> host: /etc/cni:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: ip a s:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: ip r s:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: iptables-save:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: iptables table nat:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-982980

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-982980

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-982980" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-982980" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-982980

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-982980

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-982980" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-982980" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-982980" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-982980" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-982980" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: kubelet daemon config:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> k8s: kubelet logs:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 05:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.39.46:8443
  name: cert-expiration-984816
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/15909-3857/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 05:17:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.50.116:8443
  name: force-systemd-flag-731641
contexts:
- context:
    cluster: cert-expiration-984816
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 05:16:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: cert-expiration-984816
  name: cert-expiration-984816
- context:
    cluster: force-systemd-flag-731641
    extensions:
    - extension:
        last-update: Thu, 23 Feb 2023 05:17:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: force-systemd-flag-731641
  name: force-systemd-flag-731641
current-context: force-systemd-flag-731641
kind: Config
preferences: {}
users:
- name: cert-expiration-984816
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/cert-expiration-984816/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/cert-expiration-984816/client.key
- name: force-systemd-flag-731641
  user:
    client-certificate: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/force-systemd-flag-731641/client.crt
    client-key: /home/jenkins/minikube-integration/15909-3857/.minikube/profiles/force-systemd-flag-731641/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-982980

>>> host: docker daemon status:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: docker daemon config:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: docker system info:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: cri-docker daemon status:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: cri-docker daemon config:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: cri-dockerd version:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: containerd daemon status:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: containerd daemon config:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: containerd config dump:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: crio daemon status:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: crio daemon config:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: /etc/crio:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

>>> host: crio config:
* Profile "cilium-982980" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-982980"

----------------------- debugLogs end: cilium-982980 [took: 6.522253805s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-982980" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-982980
--- SKIP: TestNetworkPlugins/group/cilium (6.96s)