Test Report: KVM_Linux_containerd 16124

eeac85fe476c751393a203217177d94606b81c9d:2023-03-21:28422

Test fail (1/297)

Order  Failed test   Duration (s)
210    TestPreload   392.9
TestPreload (392.9s)
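The failure at preload_test.go:85 means gcr.io/k8s-minikube/busybox was pulled before the stop/start cycle but is missing from the crictl image list after the restart (full log below). As a rough, hypothetical sketch only — not the actual minikube test source; the binary path and profile name are simply reused from the log — the failing check has roughly this shape:

// Hypothetical sketch, not the minikube test source: verify that an image
// pulled before `minikube stop` is still listed by crictl after the restart.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "test-preload-778713"       // profile name from the log above
	minikube := "out/minikube-linux-amd64" // binary path from the log above
	wantImage := "gcr.io/k8s-minikube/busybox"

	// Same command the test runs after the restart: list images inside the VM.
	out, err := exec.Command(minikube, "ssh", "-p", profile, "--",
		"sudo", "crictl", "image", "ls").CombinedOutput()
	if err != nil {
		fmt.Printf("crictl image ls failed: %v\n%s\n", err, out)
		return
	}

	// The assertion that fails in this run: busybox should appear in the listing.
	if !strings.Contains(string(out), wantImage) {
		fmt.Printf("Expected to find %s in image list output, instead got\n%s\n", wantImage, out)
		return
	}
	fmt.Println("image survived the stop/start cycle")
}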

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4
E0321 22:31:15.067157   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:31:38.898209   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m55.228324995s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-778713 -- sudo crictl pull gcr.io/k8s-minikube/busybox
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-778713
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-778713: (1m31.921439992s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
E0321 22:33:29.737393   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-778713 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (3m1.820330015s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-778713 -- sudo crictl image ls
preload_test.go:85: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	IMAGE                                     TAG                  IMAGE ID            SIZE
	docker.io/kindest/kindnetd                v20220726-ed811e41   d921cee849482       25.8MB
	gcr.io/k8s-minikube/storage-provisioner   v5                   6e38f40d628db       9.06MB
	k8s.gcr.io/coredns/coredns                v1.8.6               a4ca41631cc7a       13.6MB
	k8s.gcr.io/etcd                           3.5.3-0              aebe758cef4cd       102MB
	k8s.gcr.io/kube-apiserver                 v1.24.4              6cab9d1bed1be       33.8MB
	k8s.gcr.io/kube-controller-manager        v1.24.4              1f99cb6da9a82       31MB
	k8s.gcr.io/kube-proxy                     v1.24.4              7a53d1e08ef58       39.5MB
	k8s.gcr.io/kube-scheduler                 v1.24.4              03fa22539fc1c       15.5MB
	k8s.gcr.io/pause                          3.7                  221177c6082a8       311kB

-- /stdout --
panic.go:522: *** TestPreload FAILED at 2023-03-21 22:36:13.826252527 +0000 UTC m=+2806.272990007
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-778713 -n test-preload-778713
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-778713 logs -n 25
E0321 22:36:15.066229   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-778713 logs -n 25: (1.136011872s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-508124 ssh -n                                                                 | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
	|         | multinode-508124-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-508124 ssh -n multinode-508124 sudo cat                                       | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
	|         | /home/docker/cp-test_multinode-508124-m03_multinode-508124.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-508124 cp multinode-508124-m03:/home/docker/cp-test.txt                       | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
	|         | multinode-508124-m02:/home/docker/cp-test_multinode-508124-m03_multinode-508124-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-508124 ssh -n                                                                 | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
	|         | multinode-508124-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-508124 ssh -n multinode-508124-m02 sudo cat                                   | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
	|         | /home/docker/cp-test_multinode-508124-m03_multinode-508124-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-508124 node stop m03                                                          | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:10 UTC |
	| node    | multinode-508124 node start                                                             | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:10 UTC | 21 Mar 23 22:12 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-508124                                                                | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:12 UTC |                     |
	| stop    | -p multinode-508124                                                                     | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:12 UTC | 21 Mar 23 22:15 UTC |
	| start   | -p multinode-508124                                                                     | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:15 UTC | 21 Mar 23 22:21 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-508124                                                                | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:21 UTC |                     |
	| node    | multinode-508124 node delete                                                            | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:21 UTC | 21 Mar 23 22:21 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-508124 stop                                                                   | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:21 UTC | 21 Mar 23 22:24 UTC |
	| start   | -p multinode-508124                                                                     | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:24 UTC | 21 Mar 23 22:28 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | list -p multinode-508124                                                                | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:28 UTC |                     |
	| start   | -p multinode-508124-m02                                                                 | multinode-508124-m02 | jenkins | v1.29.0 | 21 Mar 23 22:28 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| start   | -p multinode-508124-m03                                                                 | multinode-508124-m03 | jenkins | v1.29.0 | 21 Mar 23 22:28 UTC | 21 Mar 23 22:29 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| node    | add -p multinode-508124                                                                 | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC |                     |
	| delete  | -p multinode-508124-m03                                                                 | multinode-508124-m03 | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | 21 Mar 23 22:29 UTC |
	| delete  | -p multinode-508124                                                                     | multinode-508124     | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | 21 Mar 23 22:29 UTC |
	| start   | -p test-preload-778713                                                                  | test-preload-778713  | jenkins | v1.29.0 | 21 Mar 23 22:29 UTC | 21 Mar 23 22:31 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| ssh     | -p test-preload-778713                                                                  | test-preload-778713  | jenkins | v1.29.0 | 21 Mar 23 22:31 UTC | 21 Mar 23 22:31 UTC |
	|         | -- sudo crictl pull                                                                     |                      |         |         |                     |                     |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-778713                                                                  | test-preload-778713  | jenkins | v1.29.0 | 21 Mar 23 22:31 UTC | 21 Mar 23 22:33 UTC |
	| start   | -p test-preload-778713                                                                  | test-preload-778713  | jenkins | v1.29.0 | 21 Mar 23 22:33 UTC | 21 Mar 23 22:36 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=containerd                                                          |                      |         |         |                     |                     |
	| ssh     | -p test-preload-778713 -- sudo                                                          | test-preload-778713  | jenkins | v1.29.0 | 21 Mar 23 22:36 UTC | 21 Mar 23 22:36 UTC |
	|         | crictl image ls                                                                         |                      |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 22:33:11
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 22:33:11.803469   79998 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:33:11.803569   79998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:33:11.803577   79998 out.go:309] Setting ErrFile to fd 2...
	I0321 22:33:11.803582   79998 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:33:11.803677   79998 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	I0321 22:33:11.804199   79998 out.go:303] Setting JSON to false
	I0321 22:33:11.805067   79998 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":11742,"bootTime":1679426250,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 22:33:11.805117   79998 start.go:135] virtualization: kvm guest
	I0321 22:33:11.807536   79998 out.go:177] * [test-preload-778713] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 22:33:11.809385   79998 notify.go:220] Checking for updates...
	I0321 22:33:11.810889   79998 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 22:33:11.812910   79998 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 22:33:11.814316   79998 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 22:33:11.815681   79998 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	I0321 22:33:11.817037   79998 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 22:33:11.818354   79998 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 22:33:11.819917   79998 config.go:182] Loaded profile config "test-preload-778713": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0321 22:33:11.820268   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:33:11.820310   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:33:11.833948   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44343
	I0321 22:33:11.834305   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:33:11.834808   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:33:11.834831   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:33:11.835188   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:33:11.835366   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:11.837005   79998 out.go:177] * Kubernetes 1.26.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.26.2
	I0321 22:33:11.838303   79998 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 22:33:11.838721   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:33:11.838758   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:33:11.851819   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
	I0321 22:33:11.852205   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:33:11.852659   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:33:11.852723   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:33:11.852991   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:33:11.853170   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:11.884706   79998 out.go:177] * Using the kvm2 driver based on existing profile
	I0321 22:33:11.885956   79998 start.go:295] selected driver: kvm2
	I0321 22:33:11.885968   79998 start.go:856] validating driver "kvm2" against &{Name:test-preload-778713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConf
ig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/min
ikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:33:11.886084   79998 start.go:867] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 22:33:11.886755   79998 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 22:33:11.886824   79998 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16124-57437/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0321 22:33:11.899407   79998 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0321 22:33:11.899696   79998 start_flags.go:919] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0321 22:33:11.899733   79998 cni.go:84] Creating CNI manager for ""
	I0321 22:33:11.899748   79998 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0321 22:33:11.899763   79998 start_flags.go:319] config:
	{Name:test-preload-778713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:defaul
t APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144
MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:33:11.899872   79998 iso.go:125] acquiring lock: {Name:mkfce26b31a4ea2eba60da091679606a7e7271e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 22:33:11.901583   79998 out.go:177] * Starting control plane node test-preload-778713 in cluster test-preload-778713
	I0321 22:33:11.902824   79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0321 22:33:11.927873   79998 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0321 22:33:11.927894   79998 cache.go:57] Caching tarball of preloaded images
	I0321 22:33:11.928002   79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0321 22:33:11.929648   79998 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0321 22:33:11.930959   79998 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0321 22:33:11.963831   79998 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4?checksum=md5:41d292e9d8b8bb8fdf3bc94dc3c43bf0 -> /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4
	I0321 22:33:15.083975   79998 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0321 22:33:15.084057   79998 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 ...
	I0321 22:33:16.005314   79998 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on containerd
	I0321 22:33:16.005469   79998 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/config.json ...
	I0321 22:33:16.005676   79998 cache.go:193] Successfully downloaded all kic artifacts
	I0321 22:33:16.005706   79998 start.go:364] acquiring machines lock for test-preload-778713: {Name:mkb5caebff1efd48c9f7f7696365f0c61c19b667 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0321 22:33:16.005761   79998 start.go:368] acquired machines lock for "test-preload-778713" in 40.978µs
	I0321 22:33:16.005776   79998 start.go:96] Skipping create...Using existing machine configuration
	I0321 22:33:16.005781   79998 fix.go:55] fixHost starting: 
	I0321 22:33:16.006041   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:33:16.006075   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:33:16.020069   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45273
	I0321 22:33:16.020497   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:33:16.021044   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:33:16.021071   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:33:16.021386   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:33:16.021612   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:16.021777   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
	I0321 22:33:16.023345   79998 fix.go:103] recreateIfNeeded on test-preload-778713: state=Stopped err=<nil>
	I0321 22:33:16.023385   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	W0321 22:33:16.023567   79998 fix.go:129] unexpected machine state, will restart: <nil>
	I0321 22:33:16.026198   79998 out.go:177] * Restarting existing kvm2 VM for "test-preload-778713" ...
	I0321 22:33:16.027628   79998 main.go:141] libmachine: (test-preload-778713) Calling .Start
	I0321 22:33:16.027789   79998 main.go:141] libmachine: (test-preload-778713) Ensuring networks are active...
	I0321 22:33:16.028483   79998 main.go:141] libmachine: (test-preload-778713) Ensuring network default is active
	I0321 22:33:16.028835   79998 main.go:141] libmachine: (test-preload-778713) Ensuring network mk-test-preload-778713 is active
	I0321 22:33:16.029195   79998 main.go:141] libmachine: (test-preload-778713) Getting domain xml...
	I0321 22:33:16.029810   79998 main.go:141] libmachine: (test-preload-778713) Creating domain...
	I0321 22:33:17.222988   79998 main.go:141] libmachine: (test-preload-778713) Waiting to get IP...
	I0321 22:33:17.223950   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:17.224306   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:17.224400   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:17.224308   80033 retry.go:31] will retry after 234.269246ms: waiting for machine to come up
	I0321 22:33:17.459749   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:17.460228   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:17.460254   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:17.460171   80033 retry.go:31] will retry after 374.02864ms: waiting for machine to come up
	I0321 22:33:17.835356   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:17.835739   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:17.835764   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:17.835683   80033 retry.go:31] will retry after 326.78501ms: waiting for machine to come up
	I0321 22:33:18.164110   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:18.164534   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:18.164566   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:18.164461   80033 retry.go:31] will retry after 543.227464ms: waiting for machine to come up
	I0321 22:33:18.709002   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:18.709469   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:18.709496   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:18.709422   80033 retry.go:31] will retry after 502.469144ms: waiting for machine to come up
	I0321 22:33:19.213235   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:19.213697   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:19.213721   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:19.213647   80033 retry.go:31] will retry after 587.0711ms: waiting for machine to come up
	I0321 22:33:19.802438   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:19.802937   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:19.802987   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:19.802864   80033 retry.go:31] will retry after 1.110796312s: waiting for machine to come up
	I0321 22:33:20.915024   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:20.915380   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:20.915401   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:20.915329   80033 retry.go:31] will retry after 1.258745231s: waiting for machine to come up
	I0321 22:33:22.175388   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:22.175735   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:22.175759   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:22.175708   80033 retry.go:31] will retry after 1.480442121s: waiting for machine to come up
	I0321 22:33:23.658653   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:23.659084   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:23.659137   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:23.659083   80033 retry.go:31] will retry after 2.001321941s: waiting for machine to come up
	I0321 22:33:25.663257   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:25.663728   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:25.663750   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:25.663669   80033 retry.go:31] will retry after 2.322790555s: waiting for machine to come up
	I0321 22:33:27.988573   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:27.989018   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:27.989048   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:27.988959   80033 retry.go:31] will retry after 2.488215716s: waiting for machine to come up
	I0321 22:33:30.479268   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:30.479623   79998 main.go:141] libmachine: (test-preload-778713) DBG | unable to find current IP address of domain test-preload-778713 in network mk-test-preload-778713
	I0321 22:33:30.479649   79998 main.go:141] libmachine: (test-preload-778713) DBG | I0321 22:33:30.479566   80033 retry.go:31] will retry after 3.795193672s: waiting for machine to come up
	I0321 22:33:34.278630   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.279107   79998 main.go:141] libmachine: (test-preload-778713) Found IP for machine: 192.168.39.129
	I0321 22:33:34.279137   79998 main.go:141] libmachine: (test-preload-778713) Reserving static IP address...
	I0321 22:33:34.279156   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has current primary IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.279461   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "test-preload-778713", mac: "52:54:00:24:1d:09", ip: "192.168.39.129"} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.279485   79998 main.go:141] libmachine: (test-preload-778713) DBG | skip adding static IP to network mk-test-preload-778713 - found existing host DHCP lease matching {name: "test-preload-778713", mac: "52:54:00:24:1d:09", ip: "192.168.39.129"}
	I0321 22:33:34.279496   79998 main.go:141] libmachine: (test-preload-778713) Reserved static IP address: 192.168.39.129
	I0321 22:33:34.279510   79998 main.go:141] libmachine: (test-preload-778713) Waiting for SSH to be available...
	I0321 22:33:34.279530   79998 main.go:141] libmachine: (test-preload-778713) DBG | Getting to WaitForSSH function...
	I0321 22:33:34.281473   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.281768   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.281800   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.281905   79998 main.go:141] libmachine: (test-preload-778713) DBG | Using SSH client type: external
	I0321 22:33:34.281930   79998 main.go:141] libmachine: (test-preload-778713) DBG | Using SSH private key: /home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa (-rw-------)
	I0321 22:33:34.281960   79998 main.go:141] libmachine: (test-preload-778713) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.129 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0321 22:33:34.281982   79998 main.go:141] libmachine: (test-preload-778713) DBG | About to run SSH command:
	I0321 22:33:34.281996   79998 main.go:141] libmachine: (test-preload-778713) DBG | exit 0
	I0321 22:33:34.377727   79998 main.go:141] libmachine: (test-preload-778713) DBG | SSH cmd err, output: <nil>: 
	I0321 22:33:34.378014   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetConfigRaw
	I0321 22:33:34.378614   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
	I0321 22:33:34.380806   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.381087   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.381112   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.381388   79998 profile.go:148] Saving config to /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/config.json ...
	I0321 22:33:34.381571   79998 machine.go:88] provisioning docker machine ...
	I0321 22:33:34.381593   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:34.381798   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetMachineName
	I0321 22:33:34.381942   79998 buildroot.go:166] provisioning hostname "test-preload-778713"
	I0321 22:33:34.381964   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetMachineName
	I0321 22:33:34.382116   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:34.384279   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.384596   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.384628   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.384703   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:34.384880   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:34.385015   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:34.385140   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:34.385280   79998 main.go:141] libmachine: Using SSH client type: native
	I0321 22:33:34.385718   79998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0321 22:33:34.385735   79998 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-778713 && echo "test-preload-778713" | sudo tee /etc/hostname
	I0321 22:33:34.527761   79998 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-778713
	
	I0321 22:33:34.527794   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:34.530290   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.530630   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.530668   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.530774   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:34.530966   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:34.531121   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:34.531264   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:34.531417   79998 main.go:141] libmachine: Using SSH client type: native
	I0321 22:33:34.531852   79998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0321 22:33:34.531874   79998 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-778713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-778713/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-778713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0321 22:33:34.669299   79998 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0321 22:33:34.669331   79998 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/16124-57437/.minikube CaCertPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/16124-57437/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/16124-57437/.minikube}
	I0321 22:33:34.669355   79998 buildroot.go:174] setting up certificates
	I0321 22:33:34.669378   79998 provision.go:83] configureAuth start
	I0321 22:33:34.669393   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetMachineName
	I0321 22:33:34.669624   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
	I0321 22:33:34.672015   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.672342   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.672384   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.672539   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:34.674619   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.674908   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.674939   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.675046   79998 provision.go:138] copyHostCerts
	I0321 22:33:34.675102   79998 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-57437/.minikube/ca.pem, removing ...
	I0321 22:33:34.675112   79998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.pem
	I0321 22:33:34.675174   79998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/16124-57437/.minikube/ca.pem (1082 bytes)
	I0321 22:33:34.675251   79998 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-57437/.minikube/cert.pem, removing ...
	I0321 22:33:34.675262   79998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-57437/.minikube/cert.pem
	I0321 22:33:34.675291   79998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/16124-57437/.minikube/cert.pem (1123 bytes)
	I0321 22:33:34.675338   79998 exec_runner.go:144] found /home/jenkins/minikube-integration/16124-57437/.minikube/key.pem, removing ...
	I0321 22:33:34.675345   79998 exec_runner.go:207] rm: /home/jenkins/minikube-integration/16124-57437/.minikube/key.pem
	I0321 22:33:34.675365   79998 exec_runner.go:151] cp: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/16124-57437/.minikube/key.pem (1679 bytes)
	I0321 22:33:34.675407   79998 provision.go:112] generating server cert: /home/jenkins/minikube-integration/16124-57437/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca-key.pem org=jenkins.test-preload-778713 san=[192.168.39.129 192.168.39.129 localhost 127.0.0.1 minikube test-preload-778713]
	I0321 22:33:34.789603   79998 provision.go:172] copyRemoteCerts
	I0321 22:33:34.789653   79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0321 22:33:34.789670   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:34.791939   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.792226   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.792258   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.792391   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:34.792584   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:34.792779   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:34.792959   79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
	I0321 22:33:34.887216   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0321 22:33:34.909407   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0321 22:33:34.931156   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0321 22:33:34.952834   79998 provision.go:86] duration metric: configureAuth took 283.442577ms
	I0321 22:33:34.952857   79998 buildroot.go:189] setting minikube options for container-runtime
	I0321 22:33:34.953031   79998 config.go:182] Loaded profile config "test-preload-778713": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0321 22:33:34.953045   79998 machine.go:91] provisioned docker machine in 571.461456ms
	I0321 22:33:34.953055   79998 start.go:300] post-start starting for "test-preload-778713" (driver="kvm2")
	I0321 22:33:34.953064   79998 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0321 22:33:34.953108   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:34.953394   79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0321 22:33:34.953433   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:34.956372   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.956690   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:34.956719   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:34.956947   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:34.957142   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:34.957329   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:34.957500   79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
	I0321 22:33:35.052040   79998 ssh_runner.go:195] Run: cat /etc/os-release
	I0321 22:33:35.056208   79998 info.go:137] Remote host: Buildroot 2021.02.12
	I0321 22:33:35.056229   79998 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-57437/.minikube/addons for local assets ...
	I0321 22:33:35.056289   79998 filesync.go:126] Scanning /home/jenkins/minikube-integration/16124-57437/.minikube/files for local assets ...
	I0321 22:33:35.056362   79998 filesync.go:149] local asset: /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem -> 644982.pem in /etc/ssl/certs
	I0321 22:33:35.056440   79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0321 22:33:35.065052   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem --> /etc/ssl/certs/644982.pem (1708 bytes)
	I0321 22:33:35.086967   79998 start.go:303] post-start completed in 133.899031ms
	I0321 22:33:35.086984   79998 fix.go:57] fixHost completed within 19.081203401s
	I0321 22:33:35.087007   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:35.089478   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.089809   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:35.089849   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.090024   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:35.090218   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:35.090388   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:35.090580   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:35.090748   79998 main.go:141] libmachine: Using SSH client type: native
	I0321 22:33:35.091157   79998 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1770c60] 0x1773e40 <nil>  [] 0s} 192.168.39.129 22 <nil> <nil>}
	I0321 22:33:35.091171   79998 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0321 22:33:35.218625   79998 main.go:141] libmachine: SSH cmd err, output: <nil>: 1679438015.168336345
	
	I0321 22:33:35.218663   79998 fix.go:207] guest clock: 1679438015.168336345
	I0321 22:33:35.218674   79998 fix.go:220] Guest: 2023-03-21 22:33:35.168336345 +0000 UTC Remote: 2023-03-21 22:33:35.086987671 +0000 UTC m=+23.322213811 (delta=81.348674ms)
	I0321 22:33:35.218700   79998 fix.go:191] guest clock delta is within tolerance: 81.348674ms
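The lines above show the guest-clock check: minikube runs `date +%s.%N` on the guest over SSH, parses the result, and compares it to the host clock, skipping any adjustment when the delta is within tolerance. A minimal Go sketch of that comparison, assuming a hypothetical 2-second tolerance; `parseGuestClock` and the tolerance value are illustrative, not minikube's actual helpers:

```go
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts the output of `date +%s.%N` (e.g. "1679438015.168336345",
// as seen in the log above) into a time.Time. Assumes a 9-digit nanosecond field.
// Illustrative only; not minikube's actual parser.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1679438015.168336345") // value taken from the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	// Hypothetical tolerance: only consider resyncing the guest clock beyond 2s of drift.
	const tolerance = 2 * time.Second
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
```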
	I0321 22:33:35.218711   79998 start.go:83] releasing machines lock for "test-preload-778713", held for 19.212938969s
	I0321 22:33:35.218735   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:35.219015   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
	I0321 22:33:35.221405   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.221868   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:35.221905   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.221967   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:35.222482   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:35.222642   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:33:35.222734   79998 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0321 22:33:35.222770   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:35.222899   79998 ssh_runner.go:195] Run: cat /version.json
	I0321 22:33:35.222933   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:33:35.225233   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.225478   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.225608   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:35.225637   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.225773   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:35.225895   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:35.225922   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:35.225925   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:35.226017   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:33:35.226090   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:35.226163   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:33:35.226214   79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
	I0321 22:33:35.226298   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:33:35.226437   79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
	I0321 22:33:35.338814   79998 ssh_runner.go:195] Run: systemctl --version
	I0321 22:33:35.344215   79998 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0321 22:33:35.349734   79998 cni.go:208] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0321 22:33:35.349787   79998 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0321 22:33:35.364702   79998 cni.go:261] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
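The step above disables pre-existing bridge/podman CNI configs by renaming them with a `.mk_disabled` suffix via `find ... -exec mv {} {}.mk_disabled`. A sketch of the same idea in Go, assuming local filesystem access; `disableCNIConfs` is an illustrative helper, and minikube actually performs this over SSH on the guest:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfs renames bridge/podman CNI configs in dir by appending ".mk_disabled",
// mirroring the find/mv invocation in the log above. Illustrative sketch only.
func disableCNIConfs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		return
	}
	fmt.Printf("disabled %d bridge cni config(s): %v\n", len(disabled), disabled)
}
```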
	I0321 22:33:35.364719   79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0321 22:33:35.364801   79998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0321 22:33:39.401849   79998 ssh_runner.go:235] Completed: sudo crictl images --output json: (4.037018637s)
	I0321 22:33:39.401965   79998 containerd.go:606] couldn't find preloaded image for "k8s.gcr.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0321 22:33:39.402031   79998 ssh_runner.go:195] Run: which lz4
	I0321 22:33:39.406543   79998 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0321 22:33:39.410932   79998 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0321 22:33:39.410992   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (458696921 bytes)
	I0321 22:33:41.210780   79998 containerd.go:553] Took 1.804267 seconds to copy over tarball
	I0321 22:33:41.210855   79998 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0321 22:33:44.258183   79998 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.047301821s)
	I0321 22:33:44.258210   79998 containerd.go:560] Took 3.047402 seconds to extract the tarball
	I0321 22:33:44.258219   79998 ssh_runner.go:146] rm: /preloaded.tar.lz4
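The preload path above checks for `/preloaded.tar.lz4`, copies the cached tarball over SSH when it is missing, extracts it with `tar -I lz4 -C /var -xf`, and removes it afterwards. A minimal local sketch of the extract-and-clean-up step, assuming `tar` and `lz4` are installed; `extractPreload` is illustrative, and in minikube these commands run on the guest, not the host:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a preloaded image tarball the way the log above does
// (`tar -I lz4 -C /var -xf /preloaded.tar.lz4`) and removes it afterwards.
// Sketch only: minikube streams the tarball over SSH first and runs this remotely.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	// -I lz4 filters the archive through the lz4 binary, which must be present.
	cmd := exec.Command("tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("preload extracted")
}
```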
	I0321 22:33:44.298745   79998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:33:44.390728   79998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0321 22:33:44.407273   79998 start.go:485] detecting cgroup driver to use...
	I0321 22:33:44.407344   79998 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0321 22:33:47.102820   79998 ssh_runner.go:235] Completed: sudo systemctl stop -f crio: (2.695445612s)
	I0321 22:33:47.102894   79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0321 22:33:47.115624   79998 docker.go:186] disabling cri-docker service (if available) ...
	I0321 22:33:47.115671   79998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0321 22:33:47.126668   79998 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0321 22:33:47.137884   79998 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0321 22:33:47.231644   79998 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0321 22:33:47.333961   79998 docker.go:202] disabling docker service ...
	I0321 22:33:47.334023   79998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0321 22:33:47.346635   79998 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0321 22:33:47.357603   79998 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0321 22:33:47.457503   79998 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0321 22:33:47.562200   79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0321 22:33:47.574112   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0321 22:33:47.590414   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "k8s.gcr.io/pause:3.7"|' /etc/containerd/config.toml"
	I0321 22:33:47.600005   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0321 22:33:47.609401   79998 containerd.go:145] configuring containerd to use "cgroupfs" as cgroup driver...
	I0321 22:33:47.609442   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0321 22:33:47.620175   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:33:47.631405   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0321 22:33:47.640756   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0321 22:33:47.650177   79998 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0321 22:33:47.660193   79998 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
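The sed commands above adjust `/etc/containerd/config.toml` for this run: pin the sandbox (pause) image, force `SystemdCgroup = false` for the cgroupfs driver, switch to the `io.containerd.runc.v2` runtime, and point `conf_dir` at `/etc/cni/net.d`. A Go sketch of two of those rewrites using regexp replacement instead of sed, under the assumption of local file access; `rewriteContainerdConfig` is an illustrative helper, not minikube's API:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteContainerdConfig applies two of the edits seen in the log above to a containerd
// config.toml: force SystemdCgroup = false and pin the sandbox image. Equivalent in spirit
// to the sed invocations; illustrative only.
func rewriteContainerdConfig(path, pauseImage string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
	out = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf("${1}sandbox_image = %q", pauseImage)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	if err := rewriteContainerdConfig("/etc/containerd/config.toml", "k8s.gcr.io/pause:3.7"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("config.toml updated; restart containerd for the change to take effect")
}
```

As in the log, a `systemctl daemon-reload` and `systemctl restart containerd` follow the edits so the new settings take effect.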
	I0321 22:33:47.669789   79998 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0321 22:33:47.678285   79998 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0321 22:33:47.678327   79998 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0321 22:33:47.691556   79998 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0321 22:33:47.700301   79998 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0321 22:33:47.798490   79998 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0321 22:33:47.821937   79998 start.go:532] Will wait 60s for socket path /run/containerd/containerd.sock
	I0321 22:33:47.822001   79998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0321 22:33:47.827152   79998 retry.go:31] will retry after 692.932342ms: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I0321 22:33:48.521216   79998 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
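After restarting containerd, the log above waits up to 60s for `/run/containerd/containerd.sock` to appear, retrying `stat` until it succeeds. A minimal Go sketch of that wait loop, assuming a fixed 500ms poll interval; minikube's real implementation runs `stat` over SSH and uses its own backoff-based retry helper:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a socket path until it exists or the timeout elapses,
// mirroring the "Will wait 60s for socket path" retry in the log above. Sketch only.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}
```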
	I0321 22:33:48.526578   79998 start.go:553] Will wait 60s for crictl version
	I0321 22:33:48.526630   79998 ssh_runner.go:195] Run: which crictl
	I0321 22:33:48.530368   79998 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0321 22:33:48.562922   79998 start.go:569] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v1.6.19
	RuntimeApiVersion:  v1alpha2
	I0321 22:33:48.562969   79998 ssh_runner.go:195] Run: containerd --version
	I0321 22:33:48.592179   79998 ssh_runner.go:195] Run: containerd --version
	I0321 22:33:48.626446   79998 out.go:177] * Preparing Kubernetes v1.24.4 on containerd 1.6.19 ...
	I0321 22:33:48.627691   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetIP
	I0321 22:33:48.630171   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:48.630491   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:33:48.630519   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:33:48.630749   79998 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0321 22:33:48.634646   79998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:33:48.646216   79998 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime containerd
	I0321 22:33:48.646305   79998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0321 22:33:48.673713   79998 containerd.go:610] all images are preloaded for containerd runtime.
	I0321 22:33:48.673734   79998 containerd.go:524] Images already preloaded, skipping extraction
	I0321 22:33:48.673775   79998 ssh_runner.go:195] Run: sudo crictl images --output json
	I0321 22:33:48.700322   79998 containerd.go:610] all images are preloaded for containerd runtime.
	I0321 22:33:48.700344   79998 cache_images.go:84] Images are preloaded, skipping loading
	I0321 22:33:48.700383   79998 ssh_runner.go:195] Run: sudo crictl info
	I0321 22:33:48.727914   79998 cni.go:84] Creating CNI manager for ""
	I0321 22:33:48.727938   79998 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0321 22:33:48.727962   79998 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0321 22:33:48.727980   79998 kubeadm.go:172] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.129 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-778713 NodeName:test-preload-778713 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m]}
	I0321 22:33:48.728090   79998 kubeadm.go:177] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.129
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "test-preload-778713"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.129
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.129"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0321 22:33:48.728164   79998 kubeadm.go:968] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=test-preload-778713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
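The kubelet drop-in above (`10-kubeadm.conf`) is generated from the node's settings: kubelet binary path for the target Kubernetes version, CRI socket, hostname override, and node IP. A small Go sketch showing how such a unit could be rendered with `text/template`, using the values visible in the log; the struct, template, and field names are illustrative and not minikube's actual types:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletDropIn mirrors the shape of the 10-kubeadm.conf drop-in shown in the log above.
// Illustrative only; minikube builds this from its own config structures.
const kubeletDropIn = `[Unit]
Wants=containerd.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

type kubeletParams struct {
	KubeletPath string
	CRISocket   string
	NodeName    string
	NodeIP      string
}

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log above; the rendered text is then copied to
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the guest.
	err := tmpl.Execute(os.Stdout, kubeletParams{
		KubeletPath: "/var/lib/minikube/binaries/v1.24.4/kubelet",
		CRISocket:   "unix:///run/containerd/containerd.sock",
		NodeName:    "test-preload-778713",
		NodeIP:      "192.168.39.129",
	})
	if err != nil {
		panic(err)
	}
}
```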
	I0321 22:33:48.728211   79998 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0321 22:33:48.737123   79998 binaries.go:44] Found k8s binaries, skipping transfer
	I0321 22:33:48.737169   79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0321 22:33:48.745695   79998 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (393 bytes)
	I0321 22:33:48.760844   79998 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0321 22:33:48.775300   79998 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2107 bytes)
	I0321 22:33:48.789961   79998 ssh_runner.go:195] Run: grep 192.168.39.129	control-plane.minikube.internal$ /etc/hosts
	I0321 22:33:48.793364   79998 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0321 22:33:48.804167   79998 certs.go:56] Setting up /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713 for IP: 192.168.39.129
	I0321 22:33:48.804195   79998 certs.go:186] acquiring lock for shared ca certs: {Name:mkac58eaa17acb86160b42b722a075f3da28a096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:33:48.804345   79998 certs.go:195] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.key
	I0321 22:33:48.804382   79998 certs.go:195] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/16124-57437/.minikube/proxy-client-ca.key
	I0321 22:33:48.804452   79998 certs.go:311] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key
	I0321 22:33:48.804509   79998 certs.go:311] skipping minikube signed cert generation: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/apiserver.key.9233f9e0
	I0321 22:33:48.804546   79998 certs.go:311] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/proxy-client.key
	I0321 22:33:48.804642   79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/64498.pem (1338 bytes)
	W0321 22:33:48.804667   79998 certs.go:397] ignoring /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/64498_empty.pem, impossibly tiny 0 bytes
	I0321 22:33:48.804678   79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca-key.pem (1679 bytes)
	I0321 22:33:48.804705   79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/ca.pem (1082 bytes)
	I0321 22:33:48.804730   79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/cert.pem (1123 bytes)
	I0321 22:33:48.804752   79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/certs/home/jenkins/minikube-integration/16124-57437/.minikube/certs/key.pem (1679 bytes)
	I0321 22:33:48.804793   79998 certs.go:401] found cert: /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem (1708 bytes)
	I0321 22:33:48.805312   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0321 22:33:48.826201   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0321 22:33:48.846767   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0321 22:33:48.867693   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0321 22:33:48.888542   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0321 22:33:48.909457   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0321 22:33:48.930110   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0321 22:33:48.951210   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0321 22:33:48.972018   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0321 22:33:48.992468   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/certs/64498.pem --> /usr/share/ca-certificates/64498.pem (1338 bytes)
	I0321 22:33:49.013116   79998 ssh_runner.go:362] scp /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/ssl/certs/644982.pem --> /usr/share/ca-certificates/644982.pem (1708 bytes)
	I0321 22:33:49.033767   79998 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0321 22:33:49.048517   79998 ssh_runner.go:195] Run: openssl version
	I0321 22:33:49.053687   79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/64498.pem && ln -fs /usr/share/ca-certificates/64498.pem /etc/ssl/certs/64498.pem"
	I0321 22:33:49.063325   79998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/64498.pem
	I0321 22:33:49.067623   79998 certs.go:444] hashing: -rw-r--r-- 1 root root 1338 Mar 21 21:55 /usr/share/ca-certificates/64498.pem
	I0321 22:33:49.067661   79998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/64498.pem
	I0321 22:33:49.072537   79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/64498.pem /etc/ssl/certs/51391683.0"
	I0321 22:33:49.081936   79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/644982.pem && ln -fs /usr/share/ca-certificates/644982.pem /etc/ssl/certs/644982.pem"
	I0321 22:33:49.091443   79998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/644982.pem
	I0321 22:33:49.095719   79998 certs.go:444] hashing: -rw-r--r-- 1 root root 1708 Mar 21 21:55 /usr/share/ca-certificates/644982.pem
	I0321 22:33:49.095754   79998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/644982.pem
	I0321 22:33:49.100681   79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/644982.pem /etc/ssl/certs/3ec20f2e.0"
	I0321 22:33:49.110305   79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0321 22:33:49.119933   79998 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:33:49.124001   79998 certs.go:444] hashing: -rw-r--r-- 1 root root 1111 Mar 21 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:33:49.124031   79998 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0321 22:33:49.129072   79998 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
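The certificate setup above copies each CA into `/usr/share/ca-certificates`, computes its OpenSSL subject hash with `openssl x509 -hash -noout`, and symlinks `/etc/ssl/certs/<hash>.0` to it so OpenSSL-based tools can locate the CA. A Go sketch of that hash-and-link pattern, assuming the `openssl` binary is available; `linkCACert` is an illustrative helper, and minikube runs the equivalent shell commands over SSH:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and symlinks
// /etc/ssl/certs/<hash>.0 to it, reproducing the pattern in the log above. Sketch only.
func linkCACert(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Remove a stale link first so os.Symlink does not fail with "file exists".
	_ = os.Remove(link)
	if err := os.Symlink(certPath, link); err != nil {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("created", link)
}
```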
	I0321 22:33:49.138648   79998 kubeadm.go:401] StartCluster: {Name:test-preload-778713 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVers
ion:v1.24.4 ClusterName:test-preload-778713 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9P
Version:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 22:33:49.138747   79998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0321 22:33:49.138785   79998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0321 22:33:49.165591   79998 cri.go:87] found id: ""
	I0321 22:33:49.165635   79998 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0321 22:33:49.174203   79998 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I0321 22:33:49.174216   79998 kubeadm.go:633] restartCluster start
	I0321 22:33:49.174253   79998 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0321 22:33:49.182484   79998 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:49.182930   79998 kubeconfig.go:135] verify returned: extract IP: "test-preload-778713" does not appear in /home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 22:33:49.183065   79998 kubeconfig.go:146] "test-preload-778713" context is missing from /home/jenkins/minikube-integration/16124-57437/kubeconfig - will repair!
	I0321 22:33:49.183303   79998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-57437/kubeconfig: {Name:mk8ee86e6b55120ac24d22c302b6f0547947acf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:33:49.183893   79998 kapi.go:59] client config for test-preload-778713: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key", CAFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(
nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:33:49.184685   79998 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0321 22:33:49.193162   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:49.193205   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:49.203910   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:49.704560   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:49.704652   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:49.716040   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:50.204739   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:50.204841   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:50.216399   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:50.704081   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:50.704161   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:50.715569   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:51.204586   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:51.204656   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:51.215928   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:51.704456   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:51.704553   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:51.716501   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:52.204274   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:52.204353   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:52.215515   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:52.704074   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:52.704175   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:52.715901   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:53.204448   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:53.204543   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:53.216087   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:53.704692   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:53.704762   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:53.716111   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:54.204795   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:54.204893   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:54.216907   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:54.704488   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:54.704563   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:54.716013   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:55.204623   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:55.204698   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:55.215822   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:55.704365   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:55.704461   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:55.715844   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:56.204658   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:56.204737   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:56.216268   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:56.704859   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:56.704947   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:56.717204   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:57.204796   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:57.204882   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:57.216236   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:57.704891   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:57.704997   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:57.716601   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:58.204209   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:58.204298   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:58.215866   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:58.704408   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:58.704498   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:58.715978   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:59.204713   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:59.204787   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:59.215820   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:59.215840   79998 api_server.go:165] Checking apiserver status ...
	I0321 22:33:59.215884   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0321 22:33:59.226641   79998 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0321 22:33:59.226665   79998 kubeadm.go:608] needs reconfigure: apiserver error: timed out waiting for the condition
	I0321 22:33:59.226671   79998 kubeadm.go:1120] stopping kube-system containers ...
	I0321 22:33:59.226695   79998 cri.go:52] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I0321 22:33:59.226746   79998 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0321 22:33:59.254555   79998 cri.go:87] found id: ""
	I0321 22:33:59.254619   79998 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0321 22:33:59.269468   79998 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0321 22:33:59.277733   79998 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0321 22:33:59.277785   79998 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0321 22:33:59.285731   79998 kubeadm.go:710] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0321 22:33:59.285747   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0321 22:33:59.382863   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0321 22:34:00.023599   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0321 22:34:00.338109   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0321 22:34:00.432527   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0321 22:34:00.504048   79998 api_server.go:51] waiting for apiserver process to appear ...
	I0321 22:34:00.504128   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:01.020577   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:01.520439   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:02.021144   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:02.520963   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:03.020423   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:03.521196   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:04.020474   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:04.521055   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:05.020388   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:05.520432   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:06.020341   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:06.520597   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:07.020897   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:07.520378   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:08.020738   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:08.520538   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:09.020273   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:09.521307   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:10.020559   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:10.521321   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:11.020940   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:11.521168   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:12.020457   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:12.520922   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:13.020911   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:13.520762   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:14.020679   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:14.521101   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:15.020444   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:15.521178   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:16.021120   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:16.520453   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:34:16.533579   79998 api_server.go:71] duration metric: took 16.02953311s to wait for apiserver process to appear ...
	I0321 22:34:16.533602   79998 api_server.go:87] waiting for apiserver healthz status ...
	I0321 22:34:16.533616   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:21.534194   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0321 22:34:22.035100   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:27.036188   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0321 22:34:27.534783   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:32.535848   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0321 22:34:33.034417   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:36.587602   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": read tcp 192.168.39.1:41520->192.168.39.129:8443: read: connection reset by peer
	I0321 22:34:37.035156   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:37.035680   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:37.535337   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:37.535928   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:38.034602   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:38.035292   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:38.534947   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:38.535491   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:39.035098   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:39.035699   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:39.534512   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:39.535218   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:40.034801   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:40.035369   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:40.535084   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:40.535700   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:41.034396   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:41.035004   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:41.534951   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:41.535531   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:42.034545   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:42.035146   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:42.534720   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:42.535322   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:43.034945   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:43.035497   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:43.535112   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:43.535699   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:44.034372   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:44.034982   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:44.534528   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:44.535164   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:45.034749   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:45.035352   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:45.534964   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:45.535594   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:46.035248   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:46.035914   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:46.534828   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:46.535399   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:47.034934   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:47.035583   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:47.535273   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:47.535975   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:48.034560   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:48.035207   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:48.534748   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:48.535384   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:49.035174   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:49.035796   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:49.534663   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:49.535371   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:50.034994   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:50.035575   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:50.535193   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:50.535861   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:51.034406   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:51.034988   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:51.535085   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:51.535704   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:52.034612   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:52.035203   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:52.534746   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:52.535323   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:53.034978   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:53.035650   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:53.535260   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:53.535886   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:54.034462   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:54.035056   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:54.534593   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:54.535203   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:55.034757   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:55.035402   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:55.535045   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:55.535605   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:56.035257   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:56.035964   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:56.535085   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:56.535698   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:57.035361   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:57.035969   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:57.534528   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:57.535094   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:58.034641   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:34:58.035290   79998 api_server.go:268] stopped: https://192.168.39.129:8443/healthz: Get "https://192.168.39.129:8443/healthz": dial tcp 192.168.39.129:8443: connect: connection refused
	I0321 22:34:58.534842   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:35:01.123844   79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0321 22:35:01.123872   79998 api_server.go:102] status: https://192.168.39.129:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0321 22:35:01.534401   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:35:01.540218   79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0321 22:35:01.540240   79998 api_server.go:102] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0321 22:35:02.034721   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:35:02.040713   79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0321 22:35:02.040739   79998 api_server.go:102] status: https://192.168.39.129:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0321 22:35:02.534330   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:35:02.540353   79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0321 22:35:02.547672   79998 api_server.go:140] control plane version: v1.24.4
	I0321 22:35:02.547698   79998 api_server.go:130] duration metric: took 46.014088995s to wait for apiserver health ...
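The run of checks above shows minikube probing the apiserver's /healthz endpoint roughly every 500ms, treating "connection refused", the 403 from the anonymous user, and the 500 with failing post-start hooks as "not healthy yet", and stopping as soon as a 200/"ok" comes back (about 46s in total here). Below is a minimal sketch of that kind of poll loop in Go; it is illustrative only (the helper name waitForHealthz, the plain http.DefaultClient, and the placeholder URL are assumptions, not minikube's actual code, which authenticates with the profile's client certificates).

package main

import (
	"context"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or ctx expires.
// Connection errors and non-200 statuses are treated as "not ready yet".
func waitForHealthz(ctx context.Context, client *http.Client, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never became healthy: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	// The log above polls https://192.168.39.129:8443/healthz; placeholder URL here.
	if err := waitForHealthz(ctx, http.DefaultClient, "https://127.0.0.1:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}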
	I0321 22:35:02.547712   79998 cni.go:84] Creating CNI manager for ""
	I0321 22:35:02.547720   79998 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0321 22:35:02.549470   79998 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0321 22:35:02.550720   79998 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0321 22:35:02.561781   79998 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
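The two lines above create /etc/cni/net.d and push a 457-byte bridge CNI conflist to the VM; the file's contents are not reproduced in this log. As a rough illustration of what gets written, the sketch below emits a generic bridge+portmap conflist to the same path. The JSON values are assumptions for illustration, not the exact file minikube ships.

package main

import (
	"log"
	"os"
)

// A generic bridge + portmap CNI conflist. The real file written by the
// test is not shown in this log, so treat these values as an example only.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	// Mirrors "sudo mkdir -p /etc/cni/net.d" followed by the scp in the log.
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		log.Fatal(err)
	}
}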
	I0321 22:35:02.580474   79998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0321 22:35:02.588242   79998 system_pods.go:59] 7 kube-system pods found
	I0321 22:35:02.588267   79998 system_pods.go:61] "coredns-6d4b75cb6d-4zkrg" [9ba80daf-32d4-41a3-a1bd-7c8b3168a4db] Running
	I0321 22:35:02.588275   79998 system_pods.go:61] "etcd-test-preload-778713" [ceeb8dba-f8d6-4d4b-ae99-3f8295266274] Running
	I0321 22:35:02.588281   79998 system_pods.go:61] "kube-apiserver-test-preload-778713" [518a0d87-b51c-443f-8542-75e44a061897] Running
	I0321 22:35:02.588288   79998 system_pods.go:61] "kube-controller-manager-test-preload-778713" [e5ef86be-1e24-4dd4-8934-d0c609c733f4] Running
	I0321 22:35:02.588293   79998 system_pods.go:61] "kube-proxy-vdrfz" [42f3e5be-8516-465e-8d63-949a1de4a66d] Running
	I0321 22:35:02.588306   79998 system_pods.go:61] "kube-scheduler-test-preload-778713" [932e8280-bfba-4a2d-912c-374f30a8cc37] Running
	I0321 22:35:02.588313   79998 system_pods.go:61] "storage-provisioner" [15af5481-be73-4e4b-8d93-f78926fa2edf] Running
	I0321 22:35:02.588320   79998 system_pods.go:74] duration metric: took 7.824362ms to wait for pod list to return data ...
	I0321 22:35:02.588329   79998 node_conditions.go:102] verifying NodePressure condition ...
	I0321 22:35:02.591408   79998 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0321 22:35:02.591433   79998 node_conditions.go:123] node cpu capacity is 2
	I0321 22:35:02.591447   79998 node_conditions.go:105] duration metric: took 3.111739ms to run NodePressure ...
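The NodePressure verification above amounts to reading the node's status (capacity plus conditions) and confirming that none of the kubelet pressure conditions are True. A small self-contained sketch of that check using the k8s.io/api types; the helper nodeUnderPressure and the sample node literal are illustrative, not minikube's node_conditions code.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeUnderPressure reports whether any of the kubelet pressure conditions
// (memory, disk, PID) are currently True on the given node.
func nodeUnderPressure(node *corev1.Node) bool {
	pressure := []corev1.NodeConditionType{
		corev1.NodeMemoryPressure,
		corev1.NodeDiskPressure,
		corev1.NodePIDPressure,
	}
	for _, cond := range node.Status.Conditions {
		for _, p := range pressure {
			if cond.Type == p && cond.Status == corev1.ConditionTrue {
				return true
			}
		}
	}
	return false
}

func main() {
	// Sample node with no pressure conditions set, mirroring the log above.
	node := &corev1.Node{
		Status: corev1.NodeStatus{
			Conditions: []corev1.NodeCondition{
				{Type: corev1.NodeMemoryPressure, Status: corev1.ConditionFalse},
				{Type: corev1.NodeDiskPressure, Status: corev1.ConditionFalse},
				{Type: corev1.NodePIDPressure, Status: corev1.ConditionFalse},
			},
		},
	}
	fmt.Println("under pressure:", nodeUnderPressure(node))
}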
	I0321 22:35:02.591465   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0321 22:35:02.782046   79998 kubeadm.go:769] waiting for restarted kubelet to initialise ...
	I0321 22:35:02.786049   79998 retry.go:31] will retry after 168.284477ms: kubelet not initialised
	I0321 22:35:02.959784   79998 retry.go:31] will retry after 405.745497ms: kubelet not initialised
	I0321 22:35:03.370937   79998 retry.go:31] will retry after 689.497642ms: kubelet not initialised
	I0321 22:35:04.065310   79998 retry.go:31] will retry after 1.025423078s: kubelet not initialised
	I0321 22:35:05.097032   79998 retry.go:31] will retry after 1.195125094s: kubelet not initialised
	I0321 22:35:06.298676   79998 retry.go:31] will retry after 1.772228539s: kubelet not initialised
	I0321 22:35:08.078802   79998 retry.go:31] will retry after 3.395567739s: kubelet not initialised
	I0321 22:35:11.483486   79998 retry.go:31] will retry after 4.378086122s: kubelet not initialised
	I0321 22:35:15.869890   79998 retry.go:31] will retry after 6.120616139s: kubelet not initialised
	I0321 22:35:21.996055   79998 kubeadm.go:784] kubelet initialised
	I0321 22:35:21.996080   79998 kubeadm.go:785] duration metric: took 19.214013885s waiting for restarted kubelet to initialise ...
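The retry lines above wait for the restarted kubelet with delays that grow from roughly 0.17s to 6.1s, i.e. an exponential backoff with jitter, about 19s in total. A small sketch of that pattern follows; the starting delay, growth factor, and jitter range are assumptions, not the values used by minikube's retry package.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls check until it succeeds or maxWait elapses,
// sleeping an exponentially growing, jittered interval between attempts.
func retryWithBackoff(check func() error, maxWait time.Duration) error {
	deadline := time.Now().Add(maxWait)
	delay := 150 * time.Millisecond // assumed starting point
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: last error: %w", err)
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2 // grow the base interval each attempt
	}
}

func main() {
	start := time.Now()
	err := retryWithBackoff(func() error {
		if time.Since(start) < 5*time.Second {
			return errors.New("kubelet not initialised")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("result:", err)
}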
	I0321 22:35:21.996088   79998 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:35:22.001693   79998 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
	I0321 22:35:24.015470   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:26.513887   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:29.013017   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:31.013502   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:33.015110   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:35.515475   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:38.016882   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:40.513881   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:43.014103   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:45.015375   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:47.514276   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:49.514878   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:51.515303   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:54.014711   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:56.515662   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:35:59.013583   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:36:01.515599   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:36:04.014655   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:36:06.514208   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:36:08.515354   79998 pod_ready.go:102] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"False"
	I0321 22:36:09.516135   79998 pod_ready.go:92] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:09.516163   79998 pod_ready.go:81] duration metric: took 47.514446645s waiting for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.516174   79998 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.521246   79998 pod_ready.go:92] pod "etcd-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:09.521263   79998 pod_ready.go:81] duration metric: took 5.083367ms waiting for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.521271   79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.525765   79998 pod_ready.go:92] pod "kube-apiserver-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:09.525789   79998 pod_ready.go:81] duration metric: took 4.509946ms waiting for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.525801   79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.530302   79998 pod_ready.go:92] pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:09.530322   79998 pod_ready.go:81] duration metric: took 4.512556ms waiting for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.530334   79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.534782   79998 pod_ready.go:92] pod "kube-proxy-vdrfz" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:09.534799   79998 pod_ready.go:81] duration metric: took 4.458247ms waiting for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.534807   79998 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.915118   79998 pod_ready.go:92] pod "kube-scheduler-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:09.915146   79998 pod_ready.go:81] duration metric: took 380.33221ms waiting for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:09.915161   79998 pod_ready.go:38] duration metric: took 47.919062912s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
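Each of the per-pod waits above boils down to the same check: fetch the pod from the kube-system namespace and see whether its Ready condition is True. A minimal client-go version of that check is sketched below; the kubeconfig path is a placeholder and the helper isPodReady is illustrative, not minikube's pod_ready implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has a Ready condition set to True.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path; the test uses the profile's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := isPodReady(context.Background(), cs, "kube-system", "coredns-6d4b75cb6d-4zkrg")
	fmt.Println(ready, err)
}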
	I0321 22:36:09.915186   79998 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0321 22:36:09.928182   79998 ops.go:34] apiserver oom_adj: -16
	I0321 22:36:09.928205   79998 kubeadm.go:637] restartCluster took 2m20.753981878s
	I0321 22:36:09.928215   79998 kubeadm.go:403] StartCluster complete in 2m20.789574221s
	I0321 22:36:09.928237   79998 settings.go:142] acquiring lock: {Name:mk79799ddbbfcee95eba9c02d869416a2516522c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:36:09.928365   79998 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 22:36:09.929176   79998 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/16124-57437/kubeconfig: {Name:mk8ee86e6b55120ac24d22c302b6f0547947acf4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0321 22:36:09.929448   79998 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0321 22:36:09.929596   79998 addons.go:496] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0321 22:36:09.929698   79998 addons.go:66] Setting storage-provisioner=true in profile "test-preload-778713"
	I0321 22:36:09.929721   79998 addons.go:228] Setting addon storage-provisioner=true in "test-preload-778713"
	W0321 22:36:09.929728   79998 addons.go:237] addon storage-provisioner should already be in state true
	I0321 22:36:09.929722   79998 config.go:182] Loaded profile config "test-preload-778713": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.24.4
	I0321 22:36:09.929745   79998 addons.go:66] Setting default-storageclass=true in profile "test-preload-778713"
	I0321 22:36:09.929781   79998 host.go:66] Checking if "test-preload-778713" exists ...
	I0321 22:36:09.929784   79998 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-778713"
	I0321 22:36:09.930069   79998 kapi.go:59] client config for test-preload-778713: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key", CAFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:36:09.930219   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:36:09.930294   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:36:09.930399   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:36:09.930450   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:36:09.933521   79998 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-778713" context rescaled to 1 replicas
	I0321 22:36:09.933570   79998 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.129 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0321 22:36:09.937056   79998 out.go:177] * Verifying Kubernetes components...
	I0321 22:36:09.938444   79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:36:09.945989   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39447
	I0321 22:36:09.946018   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46659
	I0321 22:36:09.946422   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:36:09.946455   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:36:09.946953   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:36:09.946982   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:36:09.947093   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:36:09.947114   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:36:09.947328   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:36:09.947458   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:36:09.947685   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
	I0321 22:36:09.947841   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:36:09.947888   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:36:09.950157   79998 kapi.go:59] client config for test-preload-778713: &rest.Config{Host:"https://192.168.39.129:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.crt", KeyFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/profiles/test-preload-778713/client.key", CAFile:"/home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x29db960), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0321 22:36:09.958884   79998 addons.go:228] Setting addon default-storageclass=true in "test-preload-778713"
	W0321 22:36:09.958912   79998 addons.go:237] addon default-storageclass should already be in state true
	I0321 22:36:09.958942   79998 host.go:66] Checking if "test-preload-778713" exists ...
	I0321 22:36:09.959317   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:36:09.959344   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:36:09.967028   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36605
	I0321 22:36:09.967509   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:36:09.968079   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:36:09.968108   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:36:09.968513   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:36:09.968747   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
	I0321 22:36:09.970699   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:36:09.973594   79998 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0321 22:36:09.975304   79998 addons.go:420] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0321 22:36:09.975327   79998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0321 22:36:09.975350   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:36:09.976403   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40769
	I0321 22:36:09.976821   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:36:09.977414   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:36:09.977440   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:36:09.977773   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:36:09.978383   79998 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:36:09.978413   79998 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:36:09.979084   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:36:09.979567   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:36:09.979599   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:36:09.979747   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:36:09.979959   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:36:09.980130   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:36:09.980275   79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
	I0321 22:36:09.992945   79998 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0321 22:36:09.993352   79998 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:36:09.993830   79998 main.go:141] libmachine: Using API Version  1
	I0321 22:36:09.993849   79998 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:36:09.994176   79998 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:36:09.994414   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetState
	I0321 22:36:09.995852   79998 main.go:141] libmachine: (test-preload-778713) Calling .DriverName
	I0321 22:36:09.996134   79998 addons.go:420] installing /etc/kubernetes/addons/storageclass.yaml
	I0321 22:36:09.996151   79998 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0321 22:36:09.996166   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHHostname
	I0321 22:36:09.998970   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:36:09.999442   79998 main.go:141] libmachine: (test-preload-778713) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:1d:09", ip: ""} in network mk-test-preload-778713: {Iface:virbr1 ExpiryTime:2023-03-21 23:33:27 +0000 UTC Type:0 Mac:52:54:00:24:1d:09 Iaid: IPaddr:192.168.39.129 Prefix:24 Hostname:test-preload-778713 Clientid:01:52:54:00:24:1d:09}
	I0321 22:36:09.999477   79998 main.go:141] libmachine: (test-preload-778713) DBG | domain test-preload-778713 has defined IP address 192.168.39.129 and MAC address 52:54:00:24:1d:09 in network mk-test-preload-778713
	I0321 22:36:09.999603   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHPort
	I0321 22:36:09.999761   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHKeyPath
	I0321 22:36:09.999899   79998 main.go:141] libmachine: (test-preload-778713) Calling .GetSSHUsername
	I0321 22:36:10.000056   79998 sshutil.go:53] new ssh client: &{IP:192.168.39.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/test-preload-778713/id_rsa Username:docker}
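The sshutil lines above show that the addon manifests are pushed over an SSH connection to the VM (192.168.39.129:22, user docker, the profile's id_rsa key). An equivalent connection with golang.org/x/crypto/ssh would look roughly like the sketch below; the key path is a placeholder, and host-key verification is skipped only because this is a throwaway test VM.

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Placeholder key path; the log uses .minikube/machines/<profile>/id_rsa.
	key, err := os.ReadFile("/path/to/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.129:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	// Run the same kind of remote command the ssh_runner lines show.
	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err=%v)\n", out, err)
}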
	I0321 22:36:10.103266   79998 node_ready.go:35] waiting up to 6m0s for node "test-preload-778713" to be "Ready" ...
	I0321 22:36:10.103306   79998 start.go:894] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0321 22:36:10.111502   79998 node_ready.go:49] node "test-preload-778713" has status "Ready":"True"
	I0321 22:36:10.111523   79998 node_ready.go:38] duration metric: took 8.222478ms waiting for node "test-preload-778713" to be "Ready" ...
	I0321 22:36:10.111531   79998 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:36:10.125083   79998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0321 22:36:10.126283   79998 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0321 22:36:10.314803   79998 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:10.713322   79998 pod_ready.go:92] pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:10.713359   79998 pod_ready.go:81] duration metric: took 398.525046ms waiting for pod "coredns-6d4b75cb6d-4zkrg" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:10.713373   79998 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:10.887489   79998 main.go:141] libmachine: Making call to close driver server
	I0321 22:36:10.887526   79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
	I0321 22:36:10.887924   79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
	I0321 22:36:10.888007   79998 main.go:141] libmachine: Successfully made call to close driver server
	I0321 22:36:10.888032   79998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0321 22:36:10.888049   79998 main.go:141] libmachine: Making call to close driver server
	I0321 22:36:10.888067   79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
	I0321 22:36:10.888330   79998 main.go:141] libmachine: Successfully made call to close driver server
	I0321 22:36:10.888353   79998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0321 22:36:10.888359   79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
	I0321 22:36:10.888371   79998 main.go:141] libmachine: Making call to close driver server
	I0321 22:36:10.888383   79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
	I0321 22:36:10.888603   79998 main.go:141] libmachine: Successfully made call to close driver server
	I0321 22:36:10.888620   79998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0321 22:36:10.965700   79998 main.go:141] libmachine: Making call to close driver server
	I0321 22:36:10.965724   79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
	I0321 22:36:10.966018   79998 main.go:141] libmachine: Successfully made call to close driver server
	I0321 22:36:10.966038   79998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0321 22:36:10.966065   79998 main.go:141] libmachine: Making call to close driver server
	I0321 22:36:10.966075   79998 main.go:141] libmachine: (test-preload-778713) Calling .Close
	I0321 22:36:10.966137   79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
	I0321 22:36:10.966306   79998 main.go:141] libmachine: Successfully made call to close driver server
	I0321 22:36:10.966324   79998 main.go:141] libmachine: Making call to close connection to plugin binary
	I0321 22:36:10.966326   79998 main.go:141] libmachine: (test-preload-778713) DBG | Closing plugin on server side
	I0321 22:36:10.968749   79998 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0321 22:36:10.970172   79998 addons.go:499] enable addons completed in 1.040575575s: enabled=[default-storageclass storage-provisioner]
	I0321 22:36:11.111117   79998 pod_ready.go:92] pod "etcd-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:11.111133   79998 pod_ready.go:81] duration metric: took 397.751491ms waiting for pod "etcd-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:11.111142   79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:11.512217   79998 pod_ready.go:92] pod "kube-apiserver-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:11.512237   79998 pod_ready.go:81] duration metric: took 401.08831ms waiting for pod "kube-apiserver-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:11.512247   79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:11.911787   79998 pod_ready.go:92] pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:11.911808   79998 pod_ready.go:81] duration metric: took 399.554216ms waiting for pod "kube-controller-manager-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:11.911818   79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:12.311785   79998 pod_ready.go:92] pod "kube-proxy-vdrfz" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:12.311807   79998 pod_ready.go:81] duration metric: took 399.98271ms waiting for pod "kube-proxy-vdrfz" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:12.311817   79998 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:12.712237   79998 pod_ready.go:92] pod "kube-scheduler-test-preload-778713" in "kube-system" namespace has status "Ready":"True"
	I0321 22:36:12.712258   79998 pod_ready.go:81] duration metric: took 400.435232ms waiting for pod "kube-scheduler-test-preload-778713" in "kube-system" namespace to be "Ready" ...
	I0321 22:36:12.712269   79998 pod_ready.go:38] duration metric: took 2.600726468s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0321 22:36:12.712291   79998 api_server.go:51] waiting for apiserver process to appear ...
	I0321 22:36:12.712332   79998 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:36:12.726796   79998 api_server.go:71] duration metric: took 2.79318534s to wait for apiserver process to appear ...
	I0321 22:36:12.726828   79998 api_server.go:87] waiting for apiserver healthz status ...
	I0321 22:36:12.726848   79998 api_server.go:252] Checking apiserver healthz at https://192.168.39.129:8443/healthz ...
	I0321 22:36:12.732331   79998 api_server.go:278] https://192.168.39.129:8443/healthz returned 200:
	ok
	I0321 22:36:12.733373   79998 api_server.go:140] control plane version: v1.24.4
	I0321 22:36:12.733390   79998 api_server.go:130] duration metric: took 6.556349ms to wait for apiserver health ...
	I0321 22:36:12.733397   79998 system_pods.go:43] waiting for kube-system pods to appear ...
	I0321 22:36:12.914698   79998 system_pods.go:59] 7 kube-system pods found
	I0321 22:36:12.914768   79998 system_pods.go:61] "coredns-6d4b75cb6d-4zkrg" [9ba80daf-32d4-41a3-a1bd-7c8b3168a4db] Running
	I0321 22:36:12.914789   79998 system_pods.go:61] "etcd-test-preload-778713" [ceeb8dba-f8d6-4d4b-ae99-3f8295266274] Running
	I0321 22:36:12.914796   79998 system_pods.go:61] "kube-apiserver-test-preload-778713" [518a0d87-b51c-443f-8542-75e44a061897] Running
	I0321 22:36:12.914804   79998 system_pods.go:61] "kube-controller-manager-test-preload-778713" [e5ef86be-1e24-4dd4-8934-d0c609c733f4] Running
	I0321 22:36:12.914810   79998 system_pods.go:61] "kube-proxy-vdrfz" [42f3e5be-8516-465e-8d63-949a1de4a66d] Running
	I0321 22:36:12.914816   79998 system_pods.go:61] "kube-scheduler-test-preload-778713" [932e8280-bfba-4a2d-912c-374f30a8cc37] Running
	I0321 22:36:12.914824   79998 system_pods.go:61] "storage-provisioner" [15af5481-be73-4e4b-8d93-f78926fa2edf] Running
	I0321 22:36:12.914833   79998 system_pods.go:74] duration metric: took 181.42948ms to wait for pod list to return data ...
	I0321 22:36:12.914853   79998 default_sa.go:34] waiting for default service account to be created ...
	I0321 22:36:13.112376   79998 default_sa.go:45] found service account: "default"
	I0321 22:36:13.112410   79998 default_sa.go:55] duration metric: took 197.549527ms for default service account to be created ...
	I0321 22:36:13.112422   79998 system_pods.go:116] waiting for k8s-apps to be running ...
	I0321 22:36:13.314614   79998 system_pods.go:86] 7 kube-system pods found
	I0321 22:36:13.314643   79998 system_pods.go:89] "coredns-6d4b75cb6d-4zkrg" [9ba80daf-32d4-41a3-a1bd-7c8b3168a4db] Running
	I0321 22:36:13.314650   79998 system_pods.go:89] "etcd-test-preload-778713" [ceeb8dba-f8d6-4d4b-ae99-3f8295266274] Running
	I0321 22:36:13.314654   79998 system_pods.go:89] "kube-apiserver-test-preload-778713" [518a0d87-b51c-443f-8542-75e44a061897] Running
	I0321 22:36:13.314659   79998 system_pods.go:89] "kube-controller-manager-test-preload-778713" [e5ef86be-1e24-4dd4-8934-d0c609c733f4] Running
	I0321 22:36:13.314663   79998 system_pods.go:89] "kube-proxy-vdrfz" [42f3e5be-8516-465e-8d63-949a1de4a66d] Running
	I0321 22:36:13.314667   79998 system_pods.go:89] "kube-scheduler-test-preload-778713" [932e8280-bfba-4a2d-912c-374f30a8cc37] Running
	I0321 22:36:13.314671   79998 system_pods.go:89] "storage-provisioner" [15af5481-be73-4e4b-8d93-f78926fa2edf] Running
	I0321 22:36:13.314678   79998 system_pods.go:126] duration metric: took 202.250278ms to wait for k8s-apps to be running ...
	I0321 22:36:13.314684   79998 system_svc.go:44] waiting for kubelet service to be running ....
	I0321 22:36:13.314746   79998 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:36:13.329327   79998 system_svc.go:56] duration metric: took 14.630148ms WaitForService to wait for kubelet.
	I0321 22:36:13.329356   79998 kubeadm.go:578] duration metric: took 3.395753535s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0321 22:36:13.329373   79998 node_conditions.go:102] verifying NodePressure condition ...
	I0321 22:36:13.512138   79998 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0321 22:36:13.512167   79998 node_conditions.go:123] node cpu capacity is 2
	I0321 22:36:13.512180   79998 node_conditions.go:105] duration metric: took 182.80253ms to run NodePressure ...
	I0321 22:36:13.512194   79998 start.go:228] waiting for startup goroutines ...
	I0321 22:36:13.512203   79998 start.go:233] waiting for cluster config update ...
	I0321 22:36:13.512215   79998 start.go:242] writing updated cluster config ...
	I0321 22:36:13.512488   79998 ssh_runner.go:195] Run: rm -f paused
	I0321 22:36:13.563797   79998 start.go:554] kubectl: 1.26.3, cluster: 1.24.4 (minor skew: 2)
	I0321 22:36:13.566354   79998 out.go:177] 
	W0321 22:36:13.568003   79998 out.go:239] ! /usr/local/bin/kubectl is version 1.26.3, which may have incompatibilities with Kubernetes 1.24.4.
	I0321 22:36:13.569606   79998 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0321 22:36:13.571390   79998 out.go:177] * Done! kubectl is now configured to use "test-preload-778713" cluster and "default" namespace by default
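The warning a few lines up comes from comparing the host kubectl (1.26.3) against the cluster version (1.24.4): two minor releases apart, which is more than the one-minor skew kubectl supports. A small sketch of that comparison follows; the parsing and the skew threshold are assumptions about how such a check could be written, not minikube's code.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" version strings.
func minorSkew(a, b string) (int, error) {
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	skew, err := minorSkew("1.26.3", "1.24.4")
	if err != nil {
		panic(err)
	}
	if skew > 1 {
		fmt.Printf("! kubectl and cluster differ by %d minor versions; expect incompatibilities\n", skew)
	}
}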
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID
	df1913d11540e       6e38f40d628db       13 seconds ago       Running             storage-provisioner       2                   34b0462712c65
	0f12728de096e       7a53d1e08ef58       39 seconds ago       Running             kube-proxy                1                   7228cd6b24fb9
	e94509bc97bb8       a4ca41631cc7a       40 seconds ago       Running             coredns                   1                   1275e00773946
	2dea0b199e13b       6e38f40d628db       44 seconds ago       Exited              storage-provisioner       1                   34b0462712c65
	a194d126ab9a4       1f99cb6da9a82       About a minute ago   Running             kube-controller-manager   3                   32219b621e38c
	f7f98bc5b364e       6cab9d1bed1be       About a minute ago   Running             kube-apiserver            2                   1b151d4da505f
	a78d6bfd8f6b7       aebe758cef4cd       About a minute ago   Running             etcd                      1                   d47aa1bedc931
	c8312b60e7fce       03fa22539fc1c       About a minute ago   Running             kube-scheduler            1                   20be83637ffe5
	ec92b2c00d9b2       6cab9d1bed1be       About a minute ago   Exited              kube-apiserver            1                   1b151d4da505f
	e44bf4ae4d833       1f99cb6da9a82       2 minutes ago        Exited              kube-controller-manager   2                   32219b621e38c
	
	* 
	* ==> containerd <==
	* -- Journal begins at Tue 2023-03-21 22:33:26 UTC, ends at Tue 2023-03-21 22:36:14 UTC. --
	Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.744471769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.744482983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.744929648Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4 pid=1580 runtime=io.containerd.runc.v2
	Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.851124326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vdrfz,Uid:42f3e5be-8516-465e-8d63-949a1de4a66d,Namespace:kube-system,Attempt:0,} returns sandbox id \"7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4\""
	Mar 21 22:35:19 test-preload-778713 containerd[632]: time="2023-03-21T22:35:19.904554625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6d4b75cb6d-4zkrg,Uid:9ba80daf-32d4-41a3-a1bd-7c8b3168a4db,Namespace:kube-system,Attempt:0,} returns sandbox id \"1275e00773946cb87910f4ca87357e11a09502fb1fa490ab80c223995fffbd17\""
	Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.567436570Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:1,}"
	Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.601721750Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for &ContainerMetadata{Name:storage-provisioner,Attempt:1,} returns container id \"2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380\""
	Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.602872726Z" level=info msg="StartContainer for \"2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380\""
	Mar 21 22:35:30 test-preload-778713 containerd[632]: time="2023-03-21T22:35:30.683003584Z" level=info msg="StartContainer for \"2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380\" returns successfully"
	Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.563102766Z" level=info msg="CreateContainer within sandbox \"1275e00773946cb87910f4ca87357e11a09502fb1fa490ab80c223995fffbd17\" for container &ContainerMetadata{Name:coredns,Attempt:1,}"
	Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.594039077Z" level=info msg="CreateContainer within sandbox \"1275e00773946cb87910f4ca87357e11a09502fb1fa490ab80c223995fffbd17\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087\""
	Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.595507604Z" level=info msg="StartContainer for \"e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087\""
	Mar 21 22:35:34 test-preload-778713 containerd[632]: time="2023-03-21T22:35:34.670423789Z" level=info msg="StartContainer for \"e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087\" returns successfully"
	Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.562911796Z" level=info msg="CreateContainer within sandbox \"7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:1,}"
	Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.597063027Z" level=info msg="CreateContainer within sandbox \"7228cd6b24fb98a717c3d424641f2941617e641dc5512e0fa13c2973d7497ef4\" for &ContainerMetadata{Name:kube-proxy,Attempt:1,} returns container id \"0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e\""
	Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.598152367Z" level=info msg="StartContainer for \"0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e\""
	Mar 21 22:35:35 test-preload-778713 containerd[632]: time="2023-03-21T22:35:35.681233194Z" level=info msg="StartContainer for \"0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e\" returns successfully"
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.768385402Z" level=info msg="shim disconnected" id=2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.768850813Z" level=warning msg="cleaning up after shim disconnected" id=2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380 namespace=k8s.io
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.768930663Z" level=info msg="cleaning up dead shim"
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.782029557Z" level=warning msg="cleanup warnings time=\"2023-03-21T22:36:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1811 runtime=io.containerd.runc.v2\n"
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.903366391Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:2,}"
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.930968669Z" level=info msg="CreateContainer within sandbox \"34b0462712c65fef060756cd10c7b3fbff8e9eeec06448dee53e8cb50d9cd270\" for &ContainerMetadata{Name:storage-provisioner,Attempt:2,} returns container id \"df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f\""
	Mar 21 22:36:00 test-preload-778713 containerd[632]: time="2023-03-21T22:36:00.932718936Z" level=info msg="StartContainer for \"df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f\""
	Mar 21 22:36:01 test-preload-778713 containerd[632]: time="2023-03-21T22:36:01.041718040Z" level=info msg="StartContainer for \"df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f\" returns successfully"
	
	* 
	* ==> coredns [e94509bc97bb8813165cc35eeea89ef83092abc5402cf68fff700bd290208087] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:46870 - 17389 "HINFO IN 7495271272311143749.5675818494558930897. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02778266s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-778713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-778713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8b6238450160ebd3d5010da9938125282f0eedd4
	                    minikube.k8s.io/name=test-preload-778713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_03_21T22_30_44_0700
	                    minikube.k8s.io/version=v1.29.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 21 Mar 2023 22:30:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-778713
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 21 Mar 2023 22:36:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 21 Mar 2023 22:35:11 +0000   Tue, 21 Mar 2023 22:30:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 21 Mar 2023 22:35:11 +0000   Tue, 21 Mar 2023 22:30:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 21 Mar 2023 22:35:11 +0000   Tue, 21 Mar 2023 22:30:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 21 Mar 2023 22:35:11 +0000   Tue, 21 Mar 2023 22:35:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.129
	  Hostname:    test-preload-778713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1673b7de8cd4a05a5e2677840a0d26a
	  System UUID:                a1673b7d-e8cd-4a05-a5e2-677840a0d26a
	  Boot ID:                    a53f9495-c984-4ebe-8894-b22bef74aacb
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://1.6.19
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-4zkrg                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     5m17s
	  kube-system                 etcd-test-preload-778713                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m29s
	  kube-system                 kube-apiserver-test-preload-778713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-controller-manager-test-preload-778713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 kube-proxy-vdrfz                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-scheduler-test-preload-778713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m30s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 38s                    kube-proxy       
	  Normal  Starting                 5m15s                  kube-proxy       
	  Normal  Starting                 5m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m30s                  kubelet          Node test-preload-778713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m30s                  kubelet          Node test-preload-778713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m30s                  kubelet          Node test-preload-778713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m20s                  kubelet          Node test-preload-778713 status is now: NodeReady
	  Normal  RegisteredNode           5m18s                  node-controller  Node test-preload-778713 event: Registered Node test-preload-778713 in Controller
	  Normal  Starting                 2m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m14s (x8 over 2m14s)  kubelet          Node test-preload-778713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m14s (x8 over 2m14s)  kubelet          Node test-preload-778713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m14s (x7 over 2m14s)  kubelet          Node test-preload-778713 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                    node-controller  Node test-preload-778713 event: Registered Node test-preload-778713 in Controller
	
	* 
	* ==> dmesg <==
	* [Mar21 22:33] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.070350] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +3.930656] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.187266] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.138912] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.498163] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000000] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +15.211172] systemd-fstab-generator[528]: Ignoring "noauto" for root device
	[  +2.839850] systemd-fstab-generator[560]: Ignoring "noauto" for root device
	[  +0.097646] systemd-fstab-generator[571]: Ignoring "noauto" for root device
	[  +0.125241] systemd-fstab-generator[584]: Ignoring "noauto" for root device
	[  +0.103923] systemd-fstab-generator[595]: Ignoring "noauto" for root device
	[  +0.235112] systemd-fstab-generator[623]: Ignoring "noauto" for root device
	[Mar21 22:34] systemd-fstab-generator[817]: Ignoring "noauto" for root device
	[Mar21 22:35] kauditd_printk_skb: 7 callbacks suppressed
	[Mar21 22:36] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [a78d6bfd8f6b70c4cef58d78d32f23bd19cf582a21afe03737e4eb8782330c4e] <==
	* {"level":"info","ts":"2023-03-21T22:34:39.960Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"245a8df1c58de0e1","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-03-21T22:34:39.961Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-03-21T22:34:39.961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 switched to configuration voters=(2619562202810409185)"}
	{"level":"info","ts":"2023-03-21T22:34:39.962Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","added-peer-id":"245a8df1c58de0e1","added-peer-peer-urls":["https://192.168.39.129:2380"]}
	{"level":"info","ts":"2023-03-21T22:34:39.962Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a2af9788ad7a361f","local-member-id":"245a8df1c58de0e1","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:34:39.962Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-03-21T22:34:39.964Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-03-21T22:34:39.964Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"245a8df1c58de0e1","initial-advertise-peer-urls":["https://192.168.39.129:2380"],"listen-peer-urls":["https://192.168.39.129:2380"],"advertise-client-urls":["https://192.168.39.129:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.129:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-03-21T22:34:39.965Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-03-21T22:34:39.965Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2023-03-21T22:34:39.965Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.129:2380"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 is starting a new election at term 2"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgPreVoteResp from 245a8df1c58de0e1 at term 2"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became candidate at term 3"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 received MsgVoteResp from 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"245a8df1c58de0e1 became leader at term 3"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 245a8df1c58de0e1 elected leader 245a8df1c58de0e1 at term 3"}
	{"level":"info","ts":"2023-03-21T22:34:41.045Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"245a8df1c58de0e1","local-member-attributes":"{Name:test-preload-778713 ClientURLs:[https://192.168.39.129:2379]}","request-path":"/0/members/245a8df1c58de0e1/attributes","cluster-id":"a2af9788ad7a361f","publish-timeout":"7s"}
	{"level":"info","ts":"2023-03-21T22:34:41.046Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-21T22:34:41.047Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.129:2379"}
	{"level":"info","ts":"2023-03-21T22:34:41.047Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-03-21T22:34:41.048Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-03-21T22:34:41.048Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-03-21T22:34:41.049Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  22:36:14 up 2 min,  0 users,  load average: 0.25, 0.12, 0.04
	Linux test-preload-778713 5.10.57 #1 SMP Fri Mar 17 22:07:25 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ec92b2c00d9b294ce2548a83156bfd4288871592390c53319264be24089b8547] <==
	* I0321 22:34:16.333267       1 server.go:558] external host was not specified, using 192.168.39.129
	I0321 22:34:16.333999       1 server.go:158] Version: v1.24.4
	I0321 22:34:16.334046       1 server.go:160] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0321 22:34:16.574237       1 shared_informer.go:255] Waiting for caches to sync for node_authorizer
	I0321 22:34:16.575299       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0321 22:34:16.575311       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	I0321 22:34:16.576565       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0321 22:34:16.576580       1 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
	W0321 22:34:16.579901       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:17.575339       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:17.580402       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:18.575954       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:19.354467       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:20.140720       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:22.356961       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:23.013256       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:25.725917       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:27.067125       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:31.716202       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W0321 22:34:34.692458       1 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	E0321 22:34:36.580548       1 run.go:74] "command failed" err="context deadline exceeded"
	
	* 
	* ==> kube-apiserver [f7f98bc5b364e8412b5e83481e4b5d55058cffbe702f2006c5ccf0c57069baad] <==
	* I0321 22:35:01.112353       1 controller.go:85] Starting OpenAPI V3 controller
	I0321 22:35:01.112368       1 naming_controller.go:291] Starting NamingConditionController
	I0321 22:35:01.112518       1 establishing_controller.go:76] Starting EstablishingController
	I0321 22:35:01.112525       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0321 22:35:01.112530       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0321 22:35:01.112538       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0321 22:35:01.080856       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0321 22:35:01.167498       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0321 22:35:01.173613       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0321 22:35:01.178923       1 cache.go:39] Caches are synced for autoregister controller
	I0321 22:35:01.180585       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0321 22:35:01.181121       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0321 22:35:01.181384       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I0321 22:35:01.202829       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0321 22:35:01.221649       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0321 22:35:01.730404       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0321 22:35:02.087960       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0321 22:35:02.697688       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0321 22:35:02.706971       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0321 22:35:02.742116       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0321 22:35:02.764510       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0321 22:35:02.772333       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0321 22:35:20.110317       1 controller.go:611] quota admission added evaluator for: endpoints
	I0321 22:35:20.113261       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0321 22:35:35.860864       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	
	* 
	* ==> kube-controller-manager [a194d126ab9a4284f87c9770ea07108020456ecce39d1fbaca418d27f321c62d] <==
	* I0321 22:35:13.962711       1 shared_informer.go:262] Caches are synced for persistent volume
	I0321 22:35:13.962666       1 shared_informer.go:262] Caches are synced for HPA
	I0321 22:35:13.964984       1 shared_informer.go:262] Caches are synced for cronjob
	I0321 22:35:13.971265       1 shared_informer.go:262] Caches are synced for attach detach
	I0321 22:35:13.991478       1 shared_informer.go:262] Caches are synced for taint
	I0321 22:35:13.991710       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0321 22:35:13.991936       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0321 22:35:13.992217       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-778713. Assuming now as a timestamp.
	I0321 22:35:13.992277       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0321 22:35:13.992830       1 event.go:294] "Event occurred" object="test-preload-778713" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-778713 event: Registered Node test-preload-778713 in Controller"
	I0321 22:35:14.001672       1 shared_informer.go:262] Caches are synced for TTL
	I0321 22:35:14.002965       1 shared_informer.go:262] Caches are synced for node
	I0321 22:35:14.003101       1 range_allocator.go:173] Starting range CIDR allocator
	I0321 22:35:14.003305       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I0321 22:35:14.003321       1 shared_informer.go:262] Caches are synced for cidrallocator
	I0321 22:35:14.097198       1 shared_informer.go:262] Caches are synced for namespace
	I0321 22:35:14.103920       1 shared_informer.go:262] Caches are synced for resource quota
	I0321 22:35:14.107323       1 shared_informer.go:262] Caches are synced for service account
	I0321 22:35:14.113713       1 shared_informer.go:262] Caches are synced for stateful set
	I0321 22:35:14.155177       1 shared_informer.go:262] Caches are synced for disruption
	I0321 22:35:14.155210       1 disruption.go:371] Sending events to api server.
	I0321 22:35:14.161801       1 shared_informer.go:262] Caches are synced for resource quota
	I0321 22:35:14.607471       1 shared_informer.go:262] Caches are synced for garbage collector
	I0321 22:35:14.612841       1 shared_informer.go:262] Caches are synced for garbage collector
	I0321 22:35:14.612937       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	
	* 
	* ==> kube-controller-manager [e44bf4ae4d83323c8294d825664087f954c343c35d2b33be081be33a5efbbea5] <==
	* 	vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:190 +0x2f6
	k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run.func1()
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:165 +0x3c
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x3931a60?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x3e
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x4d010e0, 0xc000748a20}, 0x1, 0xc000102360)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0xdf8475800, 0x0, 0xa0?, 0xc00006a7d0?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
	k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x4d2abb0?, 0xc000622980?, 0xc00078b860?)
		vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
	created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
		vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:164 +0x372
	
	goroutine 144 [syscall]:
	syscall.Syscall6(0xe8, 0xd, 0xc000a8fc14, 0x7, 0xffffffffffffffff, 0x0, 0x0)
		/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
	k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x99b17e83b43979f2?, {0xc000a8fc14?, 0xab082cace494b7fc?, 0x5a594ffa9574f6ca?}, 0xc52e66829fa87e8b?)
		vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:56 +0x58
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00065f3e0)
		vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
	k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0000b6730)
		vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x26e
	created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
		vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c5
	
	* 
	* ==> kube-proxy [0f12728de096e667d674a130e2a3f8da71dd487711f3760da7c4fc971840a61e] <==
	* I0321 22:35:35.776316       1 node.go:163] Successfully retrieved node IP: 192.168.39.129
	I0321 22:35:35.776700       1 server_others.go:138] "Detected node IP" address="192.168.39.129"
	I0321 22:35:35.777013       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0321 22:35:35.843632       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0321 22:35:35.843675       1 server_others.go:206] "Using iptables Proxier"
	I0321 22:35:35.843699       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0321 22:35:35.844363       1 server.go:661] "Version info" version="v1.24.4"
	I0321 22:35:35.844398       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0321 22:35:35.847666       1 config.go:444] "Starting node config controller"
	I0321 22:35:35.847705       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0321 22:35:35.848574       1 config.go:317] "Starting service config controller"
	I0321 22:35:35.848635       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0321 22:35:35.850814       1 config.go:226] "Starting endpoint slice config controller"
	I0321 22:35:35.850853       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0321 22:35:35.949306       1 shared_informer.go:262] Caches are synced for node config
	I0321 22:35:35.950485       1 shared_informer.go:262] Caches are synced for service config
	I0321 22:35:35.951681       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [c8312b60e7fceab9785451247e3cbf4e2e56d9b90b7debd653ddb6dbb7804226] <==
	* W0321 22:34:53.008339       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: Get "https://192.168.39.129:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:53.008376       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.129:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:54.064408       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.129:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:54.064435       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.39.129:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:54.334229       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:54.334259       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:54.663596       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: Get "https://192.168.39.129:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:54.663635       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.39.129:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:55.127563       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: Get "https://192.168.39.129:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:55.127622       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.129:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:56.073617       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: Get "https://192.168.39.129:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:56.073679       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.39.129:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:56.166996       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: Get "https://192.168.39.129:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:56.167019       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.39.129:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:56.926152       1 reflector.go:324] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.129:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:56.926185       1 reflector.go:138] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.129:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:57.178450       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:57.178489       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:57.217895       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:57.217951       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.129:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:34:57.842092       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: Get "https://192.168.39.129:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	E0321 22:34:57.842148       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.39.129:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.39.129:8443: connect: connection refused
	W0321 22:35:01.130168       1 reflector.go:324] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0321 22:35:01.130222       1 reflector.go:138] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0321 22:35:18.505084       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-03-21 22:33:26 UTC, ends at Tue 2023-03-21 22:36:15 UTC. --
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.496843     823 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.497188     823 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.497386     823 topology_manager.go:200] "Topology Admit Handler"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605070     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/42f3e5be-8516-465e-8d63-949a1de4a66d-kube-proxy\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605206     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42f3e5be-8516-465e-8d63-949a1de4a66d-xtables-lock\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605256     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42f3e5be-8516-465e-8d63-949a1de4a66d-lib-modules\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605359     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtdq4\" (UniqueName: \"kubernetes.io/projected/9ba80daf-32d4-41a3-a1bd-7c8b3168a4db-kube-api-access-gtdq4\") pod \"coredns-6d4b75cb6d-4zkrg\" (UID: \"9ba80daf-32d4-41a3-a1bd-7c8b3168a4db\") " pod="kube-system/coredns-6d4b75cb6d-4zkrg"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605427     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnx4w\" (UniqueName: \"kubernetes.io/projected/42f3e5be-8516-465e-8d63-949a1de4a66d-kube-api-access-hnx4w\") pod \"kube-proxy-vdrfz\" (UID: \"42f3e5be-8516-465e-8d63-949a1de4a66d\") " pod="kube-system/kube-proxy-vdrfz"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605452     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ba80daf-32d4-41a3-a1bd-7c8b3168a4db-config-volume\") pod \"coredns-6d4b75cb6d-4zkrg\" (UID: \"9ba80daf-32d4-41a3-a1bd-7c8b3168a4db\") " pod="kube-system/coredns-6d4b75cb6d-4zkrg"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605470     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/15af5481-be73-4e4b-8d93-f78926fa2edf-tmp\") pod \"storage-provisioner\" (UID: \"15af5481-be73-4e4b-8d93-f78926fa2edf\") " pod="kube-system/storage-provisioner"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605496     823 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xsj8\" (UniqueName: \"kubernetes.io/projected/15af5481-be73-4e4b-8d93-f78926fa2edf-kube-api-access-5xsj8\") pod \"storage-provisioner\" (UID: \"15af5481-be73-4e4b-8d93-f78926fa2edf\") " pod="kube-system/storage-provisioner"
	Mar 21 22:35:18 test-preload-778713 kubelet[823]: I0321 22:35:18.605516     823 reconciler.go:159] "Reconciler: start to sync state"
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.564013     823 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(15af5481-be73-4e4b-8d93-
f78926fa2edf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.564052     823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID=15af5481-be73-4e4b-8d93-f78926fa2edf
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.776190     823 kuberuntime_manager.go:905] container &Container{Name:storage-provisioner,Image:gcr.io/k8s-minikube/storage-provisioner:v5,Command:[/storage-provisioner],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5xsj8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod storage-provisioner_kube-system(15af5481-be73-4e4b-8d93-
f78926fa2edf): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.776223     823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/storage-provisioner" podUID=15af5481-be73-4e4b-8d93-f78926fa2edf
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.853641     823 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-a
ccess-hnx4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vdrfz_kube-system(42f3e5be-8516-465e-8d63-949a1de4a66d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.853676     823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-vdrfz" podUID=42f3e5be-8516-465e-8d63-949a1de4a66d
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.906178     823 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gtdq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:fal
se,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-4zkrg_kube-system(9ba80daf-32d4-41a3-a1bd-7c8b3168a4db): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 21 22:35:19 test-preload-778713 kubelet[823]: E0321 22:35:19.906220     823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6d4b75cb6d-4zkrg" podUID=9ba80daf-32d4-41a3-a1bd-7c8b3168a4db
	Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.780711     823 kuberuntime_manager.go:905] container &Container{Name:kube-proxy,Image:k8s.gcr.io/kube-proxy:v1.24.4,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-a
ccess-hnx4w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-vdrfz_kube-system(42f3e5be-8516-465e-8d63-949a1de4a66d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.780839     823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-vdrfz" podUID=42f3e5be-8516-465e-8d63-949a1de4a66d
	Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.782930     823 kuberuntime_manager.go:905] container &Container{Name:coredns,Image:k8s.gcr.io/coredns/coredns:v1.8.6,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {<nil>} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{73400320 0} {<nil>} 70Mi BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-gtdq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},
},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:fal
se,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod coredns-6d4b75cb6d-4zkrg_kube-system(9ba80daf-32d4-41a3-a1bd-7c8b3168a4db): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
	Mar 21 22:35:20 test-preload-778713 kubelet[823]: E0321 22:35:20.783166     823 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/coredns-6d4b75cb6d-4zkrg" podUID=9ba80daf-32d4-41a3-a1bd-7c8b3168a4db
	Mar 21 22:36:00 test-preload-778713 kubelet[823]: I0321 22:36:00.887239     823 scope.go:110] "RemoveContainer" containerID="2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380"
	
	* 
	* ==> storage-provisioner [2dea0b199e13ba7bdf75f9adbb94ce0f50a730e1c2cae134c512f297eb17a380] <==
	* I0321 22:35:30.727052       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0321 22:36:00.732949       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [df1913d11540e787c62cdc0bbf163830e6e55ce53408e6dfd9a8e30ff343be7f] <==
	* I0321 22:36:01.069970       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0321 22:36:01.103329       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0321 22:36:01.103417       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-778713 -n test-preload-778713
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-778713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-778713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-778713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-778713: (1.171112539s)
--- FAIL: TestPreload (392.90s)

                                                
                                    

Test pass (262/297)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 6.68
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.26.2/json-events 6.55
11 TestDownloadOnly/v1.26.2/preload-exists 0
15 TestDownloadOnly/v1.26.2/LogsDuration 0.06
16 TestDownloadOnly/DeleteAll 0.35
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.34
19 TestBinaryMirror 0.62
20 TestOffline 79.69
22 TestAddons/Setup 116.27
24 TestAddons/parallel/Registry 16.49
25 TestAddons/parallel/Ingress 29.56
26 TestAddons/parallel/MetricsServer 5.89
27 TestAddons/parallel/HelmTiller 11.49
29 TestAddons/parallel/CSI 67.29
30 TestAddons/parallel/Headlamp 10.3
31 TestAddons/parallel/CloudSpanner 5.41
34 TestAddons/serial/GCPAuth/Namespaces 0.13
35 TestAddons/StoppedEnableDisable 92.01
36 TestCertOptions 73.05
37 TestCertExpiration 291.58
39 TestForceSystemdFlag 94.98
40 TestForceSystemdEnv 81.65
41 TestKVMDriverInstallOrUpdate 2.96
45 TestErrorSpam/setup 54.69
46 TestErrorSpam/start 0.34
47 TestErrorSpam/status 0.79
48 TestErrorSpam/pause 1.38
49 TestErrorSpam/unpause 1.53
50 TestErrorSpam/stop 1.47
53 TestFunctional/serial/CopySyncFile 0
54 TestFunctional/serial/StartWithProxy 108.22
55 TestFunctional/serial/AuditLog 0
56 TestFunctional/serial/SoftStart 5.96
57 TestFunctional/serial/KubeContext 0.05
58 TestFunctional/serial/KubectlGetPods 0.09
61 TestFunctional/serial/CacheCmd/cache/add_remote 2.98
62 TestFunctional/serial/CacheCmd/cache/add_local 1.51
63 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.05
64 TestFunctional/serial/CacheCmd/cache/list 0.05
65 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.23
66 TestFunctional/serial/CacheCmd/cache/cache_reload 1.92
67 TestFunctional/serial/CacheCmd/cache/delete 0.1
68 TestFunctional/serial/MinikubeKubectlCmd 0.12
69 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
70 TestFunctional/serial/ExtraConfig 38.17
71 TestFunctional/serial/ComponentHealth 0.07
72 TestFunctional/serial/LogsCmd 1.31
73 TestFunctional/serial/LogsFileCmd 1.26
75 TestFunctional/parallel/ConfigCmd 0.36
76 TestFunctional/parallel/DashboardCmd 13.24
77 TestFunctional/parallel/DryRun 0.29
78 TestFunctional/parallel/InternationalLanguage 0.51
79 TestFunctional/parallel/StatusCmd 0.97
83 TestFunctional/parallel/ServiceCmdConnect 8.59
84 TestFunctional/parallel/AddonsCmd 0.18
85 TestFunctional/parallel/PersistentVolumeClaim 42.19
87 TestFunctional/parallel/SSHCmd 0.6
88 TestFunctional/parallel/CpCmd 0.95
89 TestFunctional/parallel/MySQL 27.55
90 TestFunctional/parallel/FileSync 0.23
91 TestFunctional/parallel/CertSync 1.48
95 TestFunctional/parallel/NodeLabels 0.07
97 TestFunctional/parallel/NonActiveRuntimeDisabled 0.45
99 TestFunctional/parallel/License 0.11
100 TestFunctional/parallel/ServiceCmd/DeployApp 11.23
101 TestFunctional/parallel/Version/short 0.05
102 TestFunctional/parallel/Version/components 0.75
103 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
104 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
105 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
106 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
107 TestFunctional/parallel/ImageCommands/ImageBuild 3.42
108 TestFunctional/parallel/ImageCommands/Setup 0.85
109 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
110 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
111 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
112 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.83
113 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.54
114 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.69
115 TestFunctional/parallel/ServiceCmd/List 0.32
116 TestFunctional/parallel/ServiceCmd/JSONOutput 0.31
117 TestFunctional/parallel/ServiceCmd/HTTPS 0.31
118 TestFunctional/parallel/ServiceCmd/Format 0.31
119 TestFunctional/parallel/ServiceCmd/URL 0.36
128 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.62
129 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
130 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.32
131 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.66
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
133 TestFunctional/parallel/MountCmd/any-port 8.1
134 TestFunctional/parallel/ProfileCmd/profile_list 0.34
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.33
136 TestFunctional/parallel/MountCmd/specific-port 1.63
137 TestFunctional/delete_addon-resizer_images 0.16
138 TestFunctional/delete_my-image_image 0.06
139 TestFunctional/delete_minikube_cached_images 0.06
143 TestIngressAddonLegacy/StartLegacyK8sCluster 110.56
145 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 10.76
146 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.35
147 TestIngressAddonLegacy/serial/ValidateIngressAddons 40.63
150 TestJSONOutput/start/Command 82.16
151 TestJSONOutput/start/Audit 0
153 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
154 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
156 TestJSONOutput/pause/Command 0.61
157 TestJSONOutput/pause/Audit 0
159 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
160 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
162 TestJSONOutput/unpause/Command 0.56
163 TestJSONOutput/unpause/Audit 0
165 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
166 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
168 TestJSONOutput/stop/Command 17.11
169 TestJSONOutput/stop/Audit 0
171 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
172 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
173 TestErrorJSONOutput 0.44
178 TestMainNoArgs 0.05
179 TestMinikubeProfile 109.57
182 TestMountStart/serial/StartWithMountFirst 26.81
183 TestMountStart/serial/VerifyMountFirst 0.39
184 TestMountStart/serial/StartWithMountSecond 27.59
185 TestMountStart/serial/VerifyMountSecond 0.37
186 TestMountStart/serial/DeleteFirst 0.9
187 TestMountStart/serial/VerifyMountPostDelete 0.37
188 TestMountStart/serial/Stop 1.15
189 TestMountStart/serial/RestartStopped 23.43
190 TestMountStart/serial/VerifyMountPostStop 0.38
193 TestMultiNode/serial/FreshStart2Nodes 141.88
194 TestMultiNode/serial/DeployApp2Nodes 3.68
195 TestMultiNode/serial/PingHostFrom2Pods 0.86
196 TestMultiNode/serial/AddNode 67.81
197 TestMultiNode/serial/ProfileList 0.25
198 TestMultiNode/serial/CopyFile 7.14
199 TestMultiNode/serial/StopNode 2.09
200 TestMultiNode/serial/StartAfterStop 130.5
201 TestMultiNode/serial/RestartKeepsNodes 536.12
202 TestMultiNode/serial/DeleteNode 2.07
203 TestMultiNode/serial/StopMultiNode 183.35
204 TestMultiNode/serial/RestartMultiNode 237.59
205 TestMultiNode/serial/ValidateNameConflict 57.7
212 TestScheduledStopUnix 131.2
216 TestRunningBinaryUpgrade 156.8
218 TestKubernetesUpgrade 229.94
221 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
222 TestNoKubernetes/serial/StartWithK8s 132.46
223 TestStoppedBinaryUpgrade/Setup 0.52
224 TestStoppedBinaryUpgrade/Upgrade 220.04
225 TestNoKubernetes/serial/StartWithStopK8s 34.61
226 TestNoKubernetes/serial/Start 30.87
227 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
228 TestNoKubernetes/serial/ProfileList 6.7
229 TestNoKubernetes/serial/Stop 1.26
230 TestNoKubernetes/serial/StartNoArgs 35.14
231 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
240 TestPause/serial/Start 112.61
241 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
249 TestNetworkPlugins/group/false 3.84
254 TestStartStop/group/old-k8s-version/serial/FirstStart 319.82
256 TestStartStop/group/no-preload/serial/FirstStart 138.55
257 TestPause/serial/SecondStartNoReconfiguration 47.49
259 TestStartStop/group/embed-certs/serial/FirstStart 152.77
260 TestPause/serial/Pause 0.93
261 TestPause/serial/VerifyStatus 0.28
262 TestPause/serial/Unpause 0.82
263 TestPause/serial/PauseAgain 0.82
264 TestPause/serial/DeletePaused 1.18
265 TestPause/serial/VerifyDeletedResources 0.52
267 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.71
268 TestStartStop/group/no-preload/serial/DeployApp 9.46
269 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.44
270 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.9
271 TestStartStop/group/no-preload/serial/Stop 91.86
272 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.84
273 TestStartStop/group/default-k8s-diff-port/serial/Stop 91.81
274 TestStartStop/group/embed-certs/serial/DeployApp 9.33
275 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.75
276 TestStartStop/group/embed-certs/serial/Stop 91.7
277 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
278 TestStartStop/group/no-preload/serial/SecondStart 328.7
279 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
280 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 423.59
281 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.17
282 TestStartStop/group/embed-certs/serial/SecondStart 677.35
283 TestStartStop/group/old-k8s-version/serial/DeployApp 7.42
284 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.69
285 TestStartStop/group/old-k8s-version/serial/Stop 91.85
286 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
287 TestStartStop/group/old-k8s-version/serial/SecondStart 120.02
288 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 25.02
289 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
290 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
291 TestStartStop/group/old-k8s-version/serial/Pause 2.39
292 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
294 TestStartStop/group/newest-cni/serial/FirstStart 69.82
295 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
296 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
297 TestStartStop/group/no-preload/serial/Pause 2.77
298 TestNetworkPlugins/group/auto/Start 125.65
299 TestStartStop/group/newest-cni/serial/DeployApp 0
300 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
301 TestStartStop/group/newest-cni/serial/Stop 3.13
302 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
303 TestStartStop/group/newest-cni/serial/SecondStart 72.9
304 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 14.05
305 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
306 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
307 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.65
308 TestNetworkPlugins/group/kindnet/Start 80.06
309 TestNetworkPlugins/group/auto/KubeletFlags 0.21
310 TestNetworkPlugins/group/auto/NetCatPod 9.38
311 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
312 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
313 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
314 TestStartStop/group/newest-cni/serial/Pause 2.12
315 TestNetworkPlugins/group/auto/DNS 0.19
316 TestNetworkPlugins/group/auto/Localhost 0.14
317 TestNetworkPlugins/group/auto/HairPin 0.15
318 TestNetworkPlugins/group/calico/Start 98.84
319 TestNetworkPlugins/group/custom-flannel/Start 111.44
320 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
321 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
322 TestNetworkPlugins/group/kindnet/NetCatPod 10.43
323 TestNetworkPlugins/group/kindnet/DNS 0.19
324 TestNetworkPlugins/group/kindnet/Localhost 0.16
325 TestNetworkPlugins/group/kindnet/HairPin 0.13
326 TestNetworkPlugins/group/enable-default-cni/Start 71.96
327 TestNetworkPlugins/group/calico/ControllerPod 5.02
328 TestNetworkPlugins/group/calico/KubeletFlags 0.21
329 TestNetworkPlugins/group/calico/NetCatPod 12.38
330 TestNetworkPlugins/group/calico/DNS 0.17
331 TestNetworkPlugins/group/calico/Localhost 0.14
332 TestNetworkPlugins/group/calico/HairPin 0.14
333 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
334 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.44
335 TestNetworkPlugins/group/flannel/Start 91.88
336 TestNetworkPlugins/group/custom-flannel/DNS 0.18
337 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
338 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
339 TestNetworkPlugins/group/bridge/Start 130.44
340 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.21
341 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.33
342 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
343 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
344 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
345 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
346 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
347 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
348 TestStartStop/group/embed-certs/serial/Pause 2.41
349 TestNetworkPlugins/group/flannel/ControllerPod 5.02
350 TestNetworkPlugins/group/flannel/KubeletFlags 0.21
351 TestNetworkPlugins/group/flannel/NetCatPod 10.37
352 TestNetworkPlugins/group/flannel/DNS 0.16
353 TestNetworkPlugins/group/flannel/Localhost 0.16
354 TestNetworkPlugins/group/flannel/HairPin 0.14
355 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
356 TestNetworkPlugins/group/bridge/NetCatPod 10.29
357 TestNetworkPlugins/group/bridge/DNS 0.15
358 TestNetworkPlugins/group/bridge/Localhost 0.14
359 TestNetworkPlugins/group/bridge/HairPin 0.13
TestDownloadOnly/v1.16.0/json-events (6.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931109 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931109 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.682471931s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (6.68s)
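
The -o=json flag in the command above asks minikube for machine-readable, line-delimited JSON events on stdout; a minimal sketch of consuming such a stream generically (the event schema is not shown in this log, so each line is decoded into an untyped map):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	func main() {
		// Read line-delimited JSON from stdin, e.g.:
		//   out/minikube-linux-amd64 start -o=json --download-only ... | ./events
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]interface{}
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines
			}
			fmt.Println(ev)
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "read error:", err)
		}
	}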

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931109
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931109: exit status 85 (66.18936ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931109 | jenkins | v1.29.0 | 21 Mar 23 21:49 UTC |          |
	|         | -p download-only-931109        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 21:49:27
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 21:49:27.643500   64510 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:49:27.643635   64510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:27.643649   64510 out.go:309] Setting ErrFile to fd 2...
	I0321 21:49:27.643657   64510 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:27.643786   64510 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	W0321 21:49:27.643907   64510 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16124-57437/.minikube/config/config.json: open /home/jenkins/minikube-integration/16124-57437/.minikube/config/config.json: no such file or directory
	I0321 21:49:27.644577   64510 out.go:303] Setting JSON to true
	I0321 21:49:27.645527   64510 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9118,"bootTime":1679426250,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:49:27.645591   64510 start.go:135] virtualization: kvm guest
	I0321 21:49:27.648160   64510 out.go:97] [download-only-931109] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 21:49:27.649690   64510 out.go:169] MINIKUBE_LOCATION=16124
	W0321 21:49:27.648281   64510 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball: no such file or directory
	I0321 21:49:27.648353   64510 notify.go:220] Checking for updates...
	I0321 21:49:27.652288   64510 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:49:27.653651   64510 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 21:49:27.655042   64510 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	I0321 21:49:27.656480   64510 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0321 21:49:27.659498   64510 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0321 21:49:27.659742   64510 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 21:49:27.693627   64510 out.go:97] Using the kvm2 driver based on user configuration
	I0321 21:49:27.693651   64510 start.go:295] selected driver: kvm2
	I0321 21:49:27.693656   64510 start.go:856] validating driver "kvm2" against <nil>
	I0321 21:49:27.693943   64510 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 21:49:27.694007   64510 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16124-57437/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0321 21:49:27.708472   64510 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0321 21:49:27.708523   64510 start_flags.go:305] no existing cluster config was found, will generate one from the flags 
	I0321 21:49:27.708995   64510 start_flags.go:386] Using suggested 6000MB memory alloc based on sys=32101MB, container=0MB
	I0321 21:49:27.709110   64510 start_flags.go:901] Wait components to verify : map[apiserver:true system_pods:true]
	I0321 21:49:27.709138   64510 cni.go:84] Creating CNI manager for ""
	I0321 21:49:27.709151   64510 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0321 21:49:27.709157   64510 start_flags.go:314] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0321 21:49:27.709167   64510 start_flags.go:319] config:
	{Name:download-only-931109 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-931109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:49:27.709325   64510 iso.go:125] acquiring lock: {Name:mkfce26b31a4ea2eba60da091679606a7e7271e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 21:49:27.711015   64510 out.go:97] Downloading VM boot image ...
	I0321 21:49:27.711040   64510 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso.sha256 -> /home/jenkins/minikube-integration/16124-57437/.minikube/cache/iso/amd64/minikube-v1.29.0-1679074930-16079-amd64.iso
	I0321 21:49:30.161688   64510 out.go:97] Starting control plane node download-only-931109 in cluster download-only-931109
	I0321 21:49:30.161716   64510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0321 21:49:30.185480   64510 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	I0321 21:49:30.185514   64510 cache.go:57] Caching tarball of preloaded images
	I0321 21:49:30.185672   64510 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime containerd
	I0321 21:49:30.187310   64510 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0321 21:49:30.187330   64510 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4 ...
	I0321 21:49:30.221277   64510 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd -> /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931109"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)
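
The preload download above fetches the tarball with a ?checksum=md5:... query and then verifies it; a minimal sketch of the same download-then-verify idea, not minikube's own download helper, with a placeholder URL and the digest from the log used only as an example value:

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"log"
		"net/http"
		"os"
	)

	// fetchWithMD5 downloads url to dest and returns the md5 hex digest of the bytes written.
	func fetchWithMD5(url, dest string) (string, error) {
		resp, err := http.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()

		f, err := os.Create(dest)
		if err != nil {
			return "", err
		}
		defer f.Close()

		h := md5.New()
		if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
			return "", err
		}
		return hex.EncodeToString(h.Sum(nil)), nil
	}

	func main() {
		// Placeholder URL; the real tarball URL and its md5 come from the preload index,
		// as in the ...tar.lz4?checksum=md5:d96a2b2afa188e17db7ddabb58d563fd line above.
		url := "https://example.com/preload.tar.lz4"
		want := "d96a2b2afa188e17db7ddabb58d563fd"
		got, err := fetchWithMD5(url, "/tmp/preload.tar.lz4")
		if err != nil {
			log.Fatal(err)
		}
		if got != want {
			log.Fatalf("checksum mismatch: got %s, want %s", got, want)
		}
		fmt.Println("checksum verified")
	}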

                                                
                                    
TestDownloadOnly/v1.26.2/json-events (6.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.2/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-931109 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-931109 --force --alsologtostderr --kubernetes-version=v1.26.2 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (6.548776698s)
--- PASS: TestDownloadOnly/v1.26.2/json-events (6.55s)

                                                
                                    
TestDownloadOnly/v1.26.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.2/preload-exists
--- PASS: TestDownloadOnly/v1.26.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.2/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.26.2/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-931109
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-931109: exit status 85 (61.692201ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-931109 | jenkins | v1.29.0 | 21 Mar 23 21:49 UTC |          |
	|         | -p download-only-931109        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-931109 | jenkins | v1.29.0 | 21 Mar 23 21:49 UTC |          |
	|         | -p download-only-931109        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.26.2   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/03/21 21:49:34
	Running on machine: ubuntu-20-agent-12
	Binary: Built with gc go1.20.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0321 21:49:34.392554   64545 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:49:34.392666   64545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:34.392674   64545 out.go:309] Setting ErrFile to fd 2...
	I0321 21:49:34.392679   64545 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:49:34.392800   64545 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	W0321 21:49:34.392902   64545 root.go:312] Error reading config file at /home/jenkins/minikube-integration/16124-57437/.minikube/config/config.json: open /home/jenkins/minikube-integration/16124-57437/.minikube/config/config.json: no such file or directory
	I0321 21:49:34.393319   64545 out.go:303] Setting JSON to true
	I0321 21:49:34.394068   64545 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9124,"bootTime":1679426250,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:49:34.394132   64545 start.go:135] virtualization: kvm guest
	I0321 21:49:34.396417   64545 out.go:97] [download-only-931109] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 21:49:34.397970   64545 out.go:169] MINIKUBE_LOCATION=16124
	I0321 21:49:34.396586   64545 notify.go:220] Checking for updates...
	I0321 21:49:34.400755   64545 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:49:34.402172   64545 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 21:49:34.403486   64545 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	I0321 21:49:34.404801   64545 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0321 21:49:34.407985   64545 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0321 21:49:34.408706   64545 config.go:182] Loaded profile config "download-only-931109": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.16.0
	W0321 21:49:34.408757   64545 start.go:764] api.Load failed for download-only-931109: filestore "download-only-931109": Docker machine "download-only-931109" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0321 21:49:34.408800   64545 driver.go:365] Setting default libvirt URI to qemu:///system
	W0321 21:49:34.408826   64545 start.go:764] api.Load failed for download-only-931109: filestore "download-only-931109": Docker machine "download-only-931109" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0321 21:49:34.440908   64545 out.go:97] Using the kvm2 driver based on existing profile
	I0321 21:49:34.440933   64545 start.go:295] selected driver: kvm2
	I0321 21:49:34.440938   64545 start.go:856] validating driver "kvm2" against &{Name:download-only-931109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-931109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:49:34.441259   64545 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 21:49:34.441352   64545 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/16124-57437/.minikube/bin:/home/jenkins/workspace/KVM_Linux_containerd_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0321 21:49:34.455138   64545 install.go:137] /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2 version is 1.29.0
	I0321 21:49:34.455823   64545 cni.go:84] Creating CNI manager for ""
	I0321 21:49:34.455842   64545 cni.go:145] "kvm2" driver + "containerd" runtime found, recommending bridge
	I0321 21:49:34.455851   64545 start_flags.go:319] config:
	{Name:download-only-931109 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:download-only-931109 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:49:34.455966   64545 iso.go:125] acquiring lock: {Name:mkfce26b31a4ea2eba60da091679606a7e7271e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0321 21:49:34.457640   64545 out.go:97] Starting control plane node download-only-931109 in cluster download-only-931109
	I0321 21:49:34.457652   64545 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime containerd
	I0321 21:49:34.481714   64545 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4
	I0321 21:49:34.481745   64545 cache.go:57] Caching tarball of preloaded images
	I0321 21:49:34.481874   64545 preload.go:132] Checking if preload exists for k8s version v1.26.2 and runtime containerd
	I0321 21:49:34.483578   64545 out.go:97] Downloading Kubernetes v1.26.2 preload ...
	I0321 21:49:34.483592   64545 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4 ...
	I0321 21:49:34.510721   64545 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.26.2/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:9732ab8cab6f650b8db71c83489fbd15 -> /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4
	I0321 21:49:39.163326   64545 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4 ...
	I0321 21:49:39.163426   64545 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/16124-57437/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.26.2-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-931109"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.26.2/LogsDuration (0.06s)
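
Both LogsDuration checks assert on the specific exit status (85) of a failed command; a minimal sketch of capturing an exit code from Go with os/exec, using the binary path and profile name shown above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Run the same logs command the test runs and inspect its exit status.
		cmd := exec.Command("out/minikube-linux-amd64", "logs", "-p", "download-only-931109")
		out, err := cmd.CombinedOutput()

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("logs succeeded unexpectedly")
		case errors.As(err, &exitErr):
			fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
		default:
			fmt.Println("could not start command:", err)
		}
	}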

                                                
                                    
TestDownloadOnly/DeleteAll (0.35s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.35s)

                                                
                                    
TestDownloadOnly/DeleteAlwaysSucceeds (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-931109
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.34s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:308: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-397814 --alsologtostderr --binary-mirror http://127.0.0.1:39117 --driver=kvm2  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-397814" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-397814
--- PASS: TestBinaryMirror (0.62s)
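
The --binary-mirror flag above points minikube at a local HTTP endpoint; a minimal sketch of serving a directory as such a mirror (the directory is a placeholder and the test's actual mirror server is not shown in this log):

	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve a directory of cached binaries over HTTP so it can be passed to
		// minikube via --binary-mirror http://127.0.0.1:39117. Both the directory
		// and the port are placeholders in this sketch.
		fs := http.FileServer(http.Dir("/tmp/binary-mirror"))
		log.Println("serving /tmp/binary-mirror on 127.0.0.1:39117")
		log.Fatal(http.ListenAndServe("127.0.0.1:39117", fs))
	}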

                                                
                                    
TestOffline (79.69s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-698632 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-698632 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=containerd: (1m18.317244797s)
helpers_test.go:175: Cleaning up "offline-containerd-698632" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-698632
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-698632: (1.377372063s)
--- PASS: TestOffline (79.69s)

                                                
                                    
TestAddons/Setup (116.27s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-248329 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-248329 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=helm-tiller: (1m56.26853871s)
--- PASS: TestAddons/Setup (116.27s)

                                                
                                    
TestAddons/parallel/Registry (16.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:295: registry stabilized in 13.989522ms
addons_test.go:297: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-lz4hc" [d86f4d35-1007-49e4-95e7-1777636548eb] Running
addons_test.go:297: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019074199s
addons_test.go:300: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nlzm8" [480f0d2a-f2db-49d7-8873-939b50cd1bf4] Running
addons_test.go:300: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010681331s
addons_test.go:305: (dbg) Run:  kubectl --context addons-248329 delete po -l run=registry-test --now
addons_test.go:310: (dbg) Run:  kubectl --context addons-248329 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:310: (dbg) Done: kubectl --context addons-248329 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.904809282s)
addons_test.go:324: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 ip
2023/03/21 21:51:54 [DEBUG] GET http://192.168.39.166:5000
addons_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.49s)
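
The registry check above probes http://registry.kube-system.svc.cluster.local with wget --spider from inside a busybox pod; a rough out-of-cluster analogue is a body-less HTTP probe against the registry address from the DEBUG GET line above, sketched here with that address treated as a per-run placeholder:

	package main

	import (
		"fmt"
		"log"
		"net/http"
	)

	func main() {
		// Probe the registry endpoint without downloading a body, the same idea as
		// the in-pod "wget --spider -S" check; the address will differ per run.
		resp, err := http.Head("http://192.168.39.166:5000/")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		fmt.Println("registry responded with", resp.Status)
	}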

                                                
                                    
TestAddons/parallel/Ingress (29.56s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:177: (dbg) Run:  kubectl --context addons-248329 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context addons-248329 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (7.045293375s)
addons_test.go:197: (dbg) Run:  kubectl --context addons-248329 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context addons-248329 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [bb85fc25-c96c-46a2-8727-1af97d763673] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [bb85fc25-c96c-46a2-8727-1af97d763673] Running
addons_test.go:215: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.029116387s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context addons-248329 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.166
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p addons-248329 addons disable ingress-dns --alsologtostderr -v=1: (1.063300309s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p addons-248329 addons disable ingress --alsologtostderr -v=1: (7.552206902s)
--- PASS: TestAddons/parallel/Ingress (29.56s)
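
The ingress check above curls 127.0.0.1 while presenting Host: nginx.example.com; a minimal Go sketch of the same host-header override (the target address is a placeholder, and the test runs its curl inside the VM over ssh):

	package main

	import (
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		// Hit the ingress controller by IP while presenting the virtual host name,
		// mirroring the curl -H 'Host: nginx.example.com' check above.
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			log.Fatal(err)
		}
		req.Host = "nginx.example.com" // routes the request to the nginx Ingress rule

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}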

                                                
                                    
TestAddons/parallel/MetricsServer (5.89s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:372: metrics-server stabilized in 2.278657ms
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-5f8fcc9bb7-jv4ms" [8427399e-8ac1-409f-8dda-74a603632989] Running
addons_test.go:374: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.011081489s
addons_test.go:380: (dbg) Run:  kubectl --context addons-248329 top pods -n kube-system
addons_test.go:397: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.89s)

                                                
                                    
TestAddons/parallel/HelmTiller (11.49s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:421: tiller-deploy stabilized in 12.189565ms
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-54cb789455-8klzm" [a5fa2b39-b48e-4bf3-bab6-a7363c8d28a3] Running
addons_test.go:423: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.01477132s
addons_test.go:438: (dbg) Run:  kubectl --context addons-248329 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:438: (dbg) Done: kubectl --context addons-248329 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.861470527s)
addons_test.go:455: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.49s)

                                                
                                    
TestAddons/parallel/CSI (67.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:526: csi-hostpath-driver pods stabilized in 5.192504ms
addons_test.go:529: (dbg) Run:  kubectl --context addons-248329 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:534: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:539: (dbg) Run:  kubectl --context addons-248329 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:544: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [56df5181-0078-44db-b004-65b618b32ca3] Pending
helpers_test.go:344: "task-pv-pod" [56df5181-0078-44db-b004-65b618b32ca3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [56df5181-0078-44db-b004-65b618b32ca3] Running
addons_test.go:544: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.009063319s
addons_test.go:549: (dbg) Run:  kubectl --context addons-248329 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:554: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-248329 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-248329 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:559: (dbg) Run:  kubectl --context addons-248329 delete pod task-pv-pod
addons_test.go:559: (dbg) Done: kubectl --context addons-248329 delete pod task-pv-pod: (1.290071713s)
addons_test.go:565: (dbg) Run:  kubectl --context addons-248329 delete pvc hpvc
addons_test.go:571: (dbg) Run:  kubectl --context addons-248329 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:576: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:581: (dbg) Run:  kubectl --context addons-248329 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:586: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [dc12645a-9a2d-4a44-9442-0b52c179ef50] Pending
helpers_test.go:344: "task-pv-pod-restore" [dc12645a-9a2d-4a44-9442-0b52c179ef50] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [dc12645a-9a2d-4a44-9442-0b52c179ef50] Running
addons_test.go:586: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.010604822s
addons_test.go:591: (dbg) Run:  kubectl --context addons-248329 delete pod task-pv-pod-restore
addons_test.go:595: (dbg) Run:  kubectl --context addons-248329 delete pvc hpvc-restore
addons_test.go:599: (dbg) Run:  kubectl --context addons-248329 delete volumesnapshot new-snapshot-demo
addons_test.go:603: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:603: (dbg) Done: out/minikube-linux-amd64 -p addons-248329 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.497003291s)
addons_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p addons-248329 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (67.29s)
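
For reference, the snapshot-and-restore sequence above can be reproduced with plain kubectl. The manifests below are a minimal sketch rather than the testdata files themselves: the object names (hpvc, new-snapshot-demo, hpvc-restore) come from the log, while the snapshot/storage class names are assumptions about the csi-hostpath-driver addon's defaults.

# snapshot the existing claim "hpvc", then restore it into a new claim
kubectl --context addons-248329 apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: new-snapshot-demo
spec:
  volumeSnapshotClassName: csi-hostpath-snapclass   # assumed addon default
  source:
    persistentVolumeClaimName: hpvc
EOF
kubectl --context addons-248329 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hpvc-restore
spec:
  storageClassName: csi-hostpath-sc                 # assumed addon default
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
  dataSource:
    name: new-snapshot-demo
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
EOF
# poll readiness the same way helpers_test.go does above
kubectl --context addons-248329 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
kubectl --context addons-248329 get pvc hpvc-restore -o jsonpath='{.status.phase}'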

                                                
                                    
TestAddons/parallel/Headlamp (10.3s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-248329 --alsologtostderr -v=1
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-248329 --alsologtostderr -v=1: (1.275665698s)
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-58c48fc87f-9tpgs" [2198b8b1-e0ab-4cfd-a1db-4a0635580abb] Pending
helpers_test.go:344: "headlamp-58c48fc87f-9tpgs" [2198b8b1-e0ab-4cfd-a1db-4a0635580abb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-58c48fc87f-9tpgs" [2198b8b1-e0ab-4cfd-a1db-4a0635580abb] Running
addons_test.go:794: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.024613133s
--- PASS: TestAddons/parallel/Headlamp (10.30s)
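
Outside the harness, the same enable-and-wait check can be done directly; the kubectl wait call below is an equivalent readiness probe, not the helper the test itself uses.

out/minikube-linux-amd64 addons enable headlamp -p addons-248329
# block until the headlamp pod reports Ready, mirroring the 8m0s wait above
kubectl --context addons-248329 -n headlamp wait --for=condition=Ready pod \
  -l app.kubernetes.io/name=headlamp --timeout=8m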

                                                
                                    
TestAddons/parallel/CloudSpanner (5.41s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-58d646969f-zd4vn" [4ed44f74-7dc5-48e5-bafe-ceb5ec993c75] Running
addons_test.go:810: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.011041728s
addons_test.go:813: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-248329
--- PASS: TestAddons/parallel/CloudSpanner (5.41s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:615: (dbg) Run:  kubectl --context addons-248329 create ns new-namespace
addons_test.go:629: (dbg) Run:  kubectl --context addons-248329 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
TestAddons/StoppedEnableDisable (92.01s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:147: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-248329
addons_test.go:147: (dbg) Done: out/minikube-linux-amd64 stop -p addons-248329: (1m31.833486785s)
addons_test.go:151: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-248329
addons_test.go:155: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-248329
--- PASS: TestAddons/StoppedEnableDisable (92.01s)

                                                
                                    
TestCertOptions (73.05s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-356199 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-356199 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (1m11.137376108s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-356199 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-356199 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-356199 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-356199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-356199
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-356199: (1.29621541s)
--- PASS: TestCertOptions (73.05s)
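
The assertions behind this test reduce to checking that the extra SANs and the non-default port 8555 ended up in the apiserver certificate and kubeconfig. A rough manual equivalent (the grep patterns are illustrative, not the test's exact matching):

# the custom --apiserver-ips/--apiserver-names should appear as SANs
out/minikube-linux-amd64 -p cert-options-356199 ssh \
  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 "Subject Alternative Name"
# both the host kubeconfig and the in-VM admin.conf should point at port 8555
kubectl --context cert-options-356199 config view --minify -o jsonpath='{.clusters[0].cluster.server}'
out/minikube-linux-amd64 ssh -p cert-options-356199 -- "sudo grep server: /etc/kubernetes/admin.conf"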

                                                
                                    
TestCertExpiration (291.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-848146 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-848146 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m44.600085922s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-848146 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-848146 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (5.456065813s)
helpers_test.go:175: Cleaning up "cert-expiration-848146" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-848146
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-848146: (1.523518669s)
--- PASS: TestCertExpiration (291.58s)
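
The two starts above first issue certificates with a 3-minute lifetime and, once that window has passed, restart with an 8760h (one-year) expiry so the certificates get regenerated. The openssl check below is an illustrative way to watch the expiry change; it is not part of the test itself.

out/minikube-linux-amd64 start -p cert-expiration-848146 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=containerd
# print the current notAfter date of the apiserver certificate
out/minikube-linux-amd64 ssh -p cert-expiration-848146 -- \
  "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
# after the 3m lifetime has elapsed, a restart with a longer expiry reissues the certs
out/minikube-linux-amd64 start -p cert-expiration-848146 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=containerd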

                                                
                                    
TestForceSystemdFlag (94.98s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-141887 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-141887 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m33.59556582s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-141887 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-141887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-141887
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-141887: (1.176545147s)
--- PASS: TestForceSystemdFlag (94.98s)
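
docker_test.go:115 reads the generated containerd config after a --force-systemd start; the property of interest is the runc SystemdCgroup setting. A manual spot check (the expected line is an assumption about what minikube writes):

out/minikube-linux-amd64 -p force-systemd-flag-141887 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
# expected (assumption), under [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]:
#   SystemdCgroup = true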

                                                
                                    
TestForceSystemdEnv (81.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-748830 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:149: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-748830 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m20.25550907s)
docker_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-748830 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-748830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-748830
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-748830: (1.159120356s)
--- PASS: TestForceSystemdEnv (81.65s)
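
The env-variable variant drives the same behaviour without the flag; a sketch of the equivalent invocation:

# MINIKUBE_FORCE_SYSTEMD is read at start time in place of --force-systemd
MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-amd64 start -p force-systemd-env-748830 \
  --memory=2048 --driver=kvm2 --container-runtime=containerd
out/minikube-linux-amd64 -p force-systemd-env-748830 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup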

                                                
                                    
TestKVMDriverInstallOrUpdate (2.96s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (2.96s)

                                                
                                    
TestErrorSpam/setup (54.69s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-782459 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-782459 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-782459 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-782459 --driver=kvm2  --container-runtime=containerd: (54.687054177s)
--- PASS: TestErrorSpam/setup (54.69s)

                                                
                                    
TestErrorSpam/start (0.34s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 start --dry-run
--- PASS: TestErrorSpam/start (0.34s)

                                                
                                    
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
TestErrorSpam/pause (1.38s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 pause
--- PASS: TestErrorSpam/pause (1.38s)

                                                
                                    
TestErrorSpam/unpause (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 unpause
--- PASS: TestErrorSpam/unpause (1.53s)

                                                
                                    
TestErrorSpam/stop (1.47s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 stop: (1.32631701s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-782459 --log_dir /tmp/nospam-782459 stop
--- PASS: TestErrorSpam/stop (1.47s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1850: local sync path: /home/jenkins/minikube-integration/16124-57437/.minikube/files/etc/test/nested/copy/64498/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (108.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2229: (dbg) Run:  out/minikube-linux-amd64 start -p functional-062573 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E0321 21:56:38.897955   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:38.903909   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:38.914152   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:38.934427   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:38.974685   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:39.054987   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:39.215486   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:39.536105   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:40.177031   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:41.458005   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:44.018300   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:49.139022   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:56:59.379464   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 21:57:19.860208   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
functional_test.go:2229: (dbg) Done: out/minikube-linux-amd64 start -p functional-062573 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (1m48.215913699s)
--- PASS: TestFunctional/serial/StartWithProxy (108.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (5.96s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:654: (dbg) Run:  out/minikube-linux-amd64 start -p functional-062573 --alsologtostderr -v=8
functional_test.go:654: (dbg) Done: out/minikube-linux-amd64 start -p functional-062573 --alsologtostderr -v=8: (5.956834297s)
functional_test.go:658: soft start took 5.957413744s for "functional-062573" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.96s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:676: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:691: (dbg) Run:  kubectl --context functional-062573 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:3.1
functional_test.go:1044: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:3.1: (1.031714385s)
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:3.3
functional_test.go:1044: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.98s)
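
The cache subcommands are ordinary CLI calls; run outside the harness, the same flow looks like this (the final crictl check is an extra verification step, not part of this subtest):

out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:3.1
out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:3.3
out/minikube-linux-amd64 -p functional-062573 cache add k8s.gcr.io/pause:latest
# list the host-side cache, then confirm the images landed inside the node
out/minikube-linux-amd64 cache list
out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl images | grep pause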

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1072: (dbg) Run:  docker build -t minikube-local-cache-test:functional-062573 /tmp/TestFunctionalserialCacheCmdcacheadd_local2552554381/001
functional_test.go:1084: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cache add minikube-local-cache-test:functional-062573
functional_test.go:1084: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 cache add minikube-local-cache-test:functional-062573: (1.163017497s)
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cache delete minikube-local-cache-test:functional-062573
functional_test.go:1078: (dbg) Run:  docker rmi minikube-local-cache-test:functional-062573
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.51s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1105: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1142: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1148: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (222.786788ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cache reload
functional_test.go:1153: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 cache reload: (1.223922962s)
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.92s)
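
The reload subtest deletes an image inside the node, confirms the inspect fails, then pushes the cached copy back in. Manually, step by step:

# remove the image from the node's containerd store
out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl rmi k8s.gcr.io/pause:latest
# now exits non-zero: "no such image ... present"
out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
# re-push everything in the host-side cache into the node; the inspect succeeds again
out/minikube-linux-amd64 -p functional-062573 cache reload
out/minikube-linux-amd64 -p functional-062573 ssh sudo crictl inspecti k8s.gcr.io/pause:latest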

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:711: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 kubectl -- --context functional-062573 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:736: (dbg) Run:  out/kubectl --context functional-062573 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
TestFunctional/serial/ExtraConfig (38.17s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:752: (dbg) Run:  out/minikube-linux-amd64 start -p functional-062573 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0321 21:58:00.821842   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
functional_test.go:752: (dbg) Done: out/minikube-linux-amd64 start -p functional-062573 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.170131412s)
functional_test.go:756: restart took 38.170235955s for "functional-062573" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.17s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:805: (dbg) Run:  kubectl --context functional-062573 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:820: etcd phase: Running
functional_test.go:830: etcd status: Ready
functional_test.go:820: kube-apiserver phase: Running
functional_test.go:830: kube-apiserver status: Ready
functional_test.go:820: kube-controller-manager phase: Running
functional_test.go:830: kube-controller-manager status: Ready
functional_test.go:820: kube-scheduler phase: Running
functional_test.go:830: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1231: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 logs
functional_test.go:1231: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 logs: (1.313536619s)
--- PASS: TestFunctional/serial/LogsCmd (1.31s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1245: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 logs --file /tmp/TestFunctionalserialLogsFileCmd1543310697/001/logs.txt
functional_test.go:1245: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 logs --file /tmp/TestFunctionalserialLogsFileCmd1543310697/001/logs.txt: (1.255327549s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.26s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 config get cpus: exit status 14 (59.492593ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 config set cpus 2
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 config get cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 config unset cpus
functional_test.go:1194: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 config get cpus
functional_test.go:1194: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 config get cpus: exit status 14 (55.074487ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
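
config get exits with status 14 when the key is unset, which is what both Non-zero exit entries above show; the full round trip is:

out/minikube-linux-amd64 -p functional-062573 config get cpus      # exit 14 while unset
out/minikube-linux-amd64 -p functional-062573 config set cpus 2
out/minikube-linux-amd64 -p functional-062573 config get cpus      # prints 2
out/minikube-linux-amd64 -p functional-062573 config unset cpus
out/minikube-linux-amd64 -p functional-062573 config get cpus      # exit 14 again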

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:900: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-062573 --alsologtostderr -v=1]
functional_test.go:905: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-062573 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 70696: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.24s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:969: (dbg) Run:  out/minikube-linux-amd64 start -p functional-062573 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:969: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-062573 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (146.628081ms)

                                                
                                                
-- stdout --
	* [functional-062573] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0321 21:58:53.911618   70437 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:58:53.911774   70437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:58:53.911787   70437 out.go:309] Setting ErrFile to fd 2...
	I0321 21:58:53.911795   70437 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:58:53.911944   70437 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	I0321 21:58:53.912552   70437 out.go:303] Setting JSON to false
	I0321 21:58:53.913549   70437 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9684,"bootTime":1679426250,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:58:53.913606   70437 start.go:135] virtualization: kvm guest
	I0321 21:58:53.916115   70437 out.go:177] * [functional-062573] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 21:58:53.918273   70437 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 21:58:53.918269   70437 notify.go:220] Checking for updates...
	I0321 21:58:53.920204   70437 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:58:53.921869   70437 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 21:58:53.924515   70437 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	I0321 21:58:53.926215   70437 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 21:58:53.928505   70437 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 21:58:53.930571   70437 config.go:182] Loaded profile config "functional-062573": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0321 21:58:53.931094   70437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 21:58:53.931166   70437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 21:58:53.947185   70437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33339
	I0321 21:58:53.947675   70437 main.go:141] libmachine: () Calling .GetVersion
	I0321 21:58:53.948342   70437 main.go:141] libmachine: Using API Version  1
	I0321 21:58:53.948366   70437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 21:58:53.948791   70437 main.go:141] libmachine: () Calling .GetMachineName
	I0321 21:58:53.949051   70437 main.go:141] libmachine: (functional-062573) Calling .DriverName
	I0321 21:58:53.949257   70437 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 21:58:53.949676   70437 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 21:58:53.949734   70437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 21:58:53.965466   70437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43025
	I0321 21:58:53.965799   70437 main.go:141] libmachine: () Calling .GetVersion
	I0321 21:58:53.966313   70437 main.go:141] libmachine: Using API Version  1
	I0321 21:58:53.966334   70437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 21:58:53.966606   70437 main.go:141] libmachine: () Calling .GetMachineName
	I0321 21:58:53.966815   70437 main.go:141] libmachine: (functional-062573) Calling .DriverName
	I0321 21:58:54.002763   70437 out.go:177] * Using the kvm2 driver based on existing profile
	I0321 21:58:54.004133   70437 start.go:295] selected driver: kvm2
	I0321 21:58:54.004144   70437 start.go:856] validating driver "kvm2" against &{Name:functional-062573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-062573 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:58:54.004260   70437 start.go:867] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 21:58:54.006400   70437 out.go:177] 
	W0321 21:58:54.007881   70437 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0321 21:58:54.009357   70437 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:986: (dbg) Run:  out/minikube-linux-amd64 start -p functional-062573 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.29s)
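
--dry-run runs only the validation phase, so an undersized memory request fails fast with exit code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) while a flag set that validates cleanly exits 0. A sketch:

# rejected: 250MB is below the 1800MB usable minimum
out/minikube-linux-amd64 start -p functional-062573 --dry-run --memory 250MB --driver=kvm2 --container-runtime=containerd; echo "exit=$?"
# accepted against the existing profile; nothing is actually started
out/minikube-linux-amd64 start -p functional-062573 --dry-run --driver=kvm2 --container-runtime=containerd; echo "exit=$?"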

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1015: (dbg) Run:  out/minikube-linux-amd64 start -p functional-062573 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1015: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-062573 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (510.032149ms)

                                                
                                                
-- stdout --
	* [functional-062573] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0321 21:58:54.225737   70528 out.go:296] Setting OutFile to fd 1 ...
	I0321 21:58:54.225960   70528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:58:54.225976   70528 out.go:309] Setting ErrFile to fd 2...
	I0321 21:58:54.225984   70528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 21:58:54.226226   70528 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	I0321 21:58:54.226921   70528 out.go:303] Setting JSON to false
	I0321 21:58:54.228067   70528 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":9684,"bootTime":1679426250,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 21:58:54.228172   70528 start.go:135] virtualization: kvm guest
	I0321 21:58:54.272087   70528 out.go:177] * [functional-062573] minikube v1.29.0 sur Ubuntu 20.04 (kvm/amd64)
	I0321 21:58:54.274025   70528 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 21:58:54.278902   70528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 21:58:54.274058   70528 notify.go:220] Checking for updates...
	I0321 21:58:54.281478   70528 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 21:58:54.282781   70528 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	I0321 21:58:54.284055   70528 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 21:58:54.285405   70528 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 21:58:54.287011   70528 config.go:182] Loaded profile config "functional-062573": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0321 21:58:54.287422   70528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 21:58:54.287457   70528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 21:58:54.302377   70528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40471
	I0321 21:58:54.302839   70528 main.go:141] libmachine: () Calling .GetVersion
	I0321 21:58:54.303546   70528 main.go:141] libmachine: Using API Version  1
	I0321 21:58:54.303565   70528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 21:58:54.303942   70528 main.go:141] libmachine: () Calling .GetMachineName
	I0321 21:58:54.304170   70528 main.go:141] libmachine: (functional-062573) Calling .DriverName
	I0321 21:58:54.304360   70528 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 21:58:54.304765   70528 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 21:58:54.304801   70528 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 21:58:54.324054   70528 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46727
	I0321 21:58:54.324374   70528 main.go:141] libmachine: () Calling .GetVersion
	I0321 21:58:54.324781   70528 main.go:141] libmachine: Using API Version  1
	I0321 21:58:54.324796   70528 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 21:58:54.325892   70528 main.go:141] libmachine: () Calling .GetMachineName
	I0321 21:58:54.326084   70528 main.go:141] libmachine: (functional-062573) Calling .DriverName
	I0321 21:58:54.432480   70528 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0321 21:58:54.513042   70528 start.go:295] selected driver: kvm2
	I0321 21:58:54.513064   70528 start.go:856] validating driver "kvm2" against &{Name:functional-062573 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/16079/minikube-v1.29.0-1679074930-16079-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.37-1679075007-16079@sha256:9d2bf610e530f47c7e43df9719717075e4c7ec510aea6047b29208bffe7f1978 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.26.2 ClusterName:functional-062573 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.243 Port:8441 KubernetesVersion:v1.26.2 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:}
	I0321 21:58:54.513180   70528 start.go:867] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 21:58:54.615518   70528 out.go:177] 
	W0321 21:58:54.617325   70528 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0321 21:58:54.662647   70528 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.51s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:849: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 status
functional_test.go:855: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:867: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.97s)
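
status supports plain output, a Go template via -f, and JSON via -o json; the three runs above correspond to the following forms (the format string is shown here with conventional field labels):

out/minikube-linux-amd64 -p functional-062573 status
out/minikube-linux-amd64 -p functional-062573 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
out/minikube-linux-amd64 -p functional-062573 status -o json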

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-062573 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1633: (dbg) Run:  kubectl --context functional-062573 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-5cf7cc858f-k5h9x" [af95d7cf-8d9e-47f5-8791-58511d13e580] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-5cf7cc858f-k5h9x" [af95d7cf-8d9e-47f5-8791-58511d13e580] Running
functional_test.go:1638: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006834279s
functional_test.go:1647: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 service hello-node-connect --url
functional_test.go:1653: found endpoint for hello-node-connect: http://192.168.39.243:32742
functional_test.go:1673: http://192.168.39.243:32742: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-5cf7cc858f-k5h9x

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.243:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.243:32742
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.59s)
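
The connectivity check is a standard deployment plus NodePort service plus URL lookup; reproducing it by hand (the curl at the end is an extra illustration of the success-body check above):

kubectl --context functional-062573 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
kubectl --context functional-062573 expose deployment hello-node-connect --type=NodePort --port=8080
# prints the node URL, e.g. http://192.168.39.243:32742
out/minikube-linux-amd64 -p functional-062573 service hello-node-connect --url
curl "$(out/minikube-linux-amd64 -p functional-062573 service hello-node-connect --url)"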

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1688: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 addons list
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (42.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [7de80507-f185-4859-a794-cff76d7f9931] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.025469377s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-062573 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-062573 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-062573 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-062573 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-062573 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3c3acb82-9ef7-4b2b-976e-fa706fddb3d0] Pending
helpers_test.go:344: "sp-pod" [3c3acb82-9ef7-4b2b-976e-fa706fddb3d0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3c3acb82-9ef7-4b2b-976e-fa706fddb3d0] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 19.020740127s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-062573 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-062573 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-062573 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e529899d-eafd-4f37-b0e0-85664c27d6b0] Pending
helpers_test.go:344: "sp-pod" [e529899d-eafd-4f37-b0e0-85664c27d6b0] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e529899d-eafd-4f37-b0e0-85664c27d6b0] Running
2023/03/21 21:59:07 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.014950567s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-062573 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (42.19s)
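
Note: the persistence check above can be reproduced by hand with the same manifests used in this run (a default StorageClass is assumed to be installed, as it is in minikube):
	kubectl --context functional-062573 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-062573 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-062573 exec sp-pod -- touch /tmp/mount/foo     # write through the PVC-backed mount
	kubectl --context functional-062573 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-062573 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-062573 exec sp-pod -- ls /tmp/mount            # the file survives the pod recreation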

                                                
                                    
TestFunctional/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1723: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "echo hello"
functional_test.go:1740: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

                                                
                                    
TestFunctional/parallel/CpCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh -n functional-062573 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 cp functional-062573:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3751482973/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh -n functional-062573 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.95s)
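
Note: the round trip exercised here is the cp subcommand in both directions; a minimal sketch (the local destination path is illustrative):
	out/minikube-linux-amd64 -p functional-062573 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-062573 ssh -n functional-062573 "sudo cat /home/docker/cp-test.txt"
	out/minikube-linux-amd64 -p functional-062573 cp functional-062573:/home/docker/cp-test.txt /tmp/cp-test-copy.txt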

                                                
                                    
TestFunctional/parallel/MySQL (27.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1788: (dbg) Run:  kubectl --context functional-062573 replace --force -f testdata/mysql.yaml
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-888f84dd9-hcv4k" [05b47efd-ebbf-4dff-a3be-903b9709bd96] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-888f84dd9-hcv4k" [05b47efd-ebbf-4dff-a3be-903b9709bd96] Running
functional_test.go:1794: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 19.014587519s
functional_test.go:1802: (dbg) Run:  kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;": exit status 1 (238.071259ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;": exit status 1 (205.885352ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;": exit status 1 (223.008336ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;"
functional_test.go:1802: (dbg) Non-zero exit: kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;": exit status 1 (173.444517ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1802: (dbg) Run:  kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.55s)
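
Note: the ERROR 2002/1045 exits above are expected while mysqld is still initializing inside the pod; the test simply retries the query until it succeeds. A hand-run equivalent is a small retry loop (pod name taken from this run):
	for i in $(seq 1 10); do
	  kubectl --context functional-062573 exec mysql-888f84dd9-hcv4k -- mysql -ppassword -e "show databases;" && break
	  sleep 5
	done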

                                                
                                    
TestFunctional/parallel/FileSync (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1924: Checking for existence of /etc/test/nested/copy/64498/hosts within VM
functional_test.go:1926: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /etc/test/nested/copy/64498/hosts"
functional_test.go:1931: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.23s)
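
Note: the synced file originates on the host; assuming the default ~/.minikube/files layout, anything placed under that directory is copied into the guest at the same absolute path when the cluster starts, e.g.:
	mkdir -p ~/.minikube/files/etc/test/nested/copy/64498
	echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/64498/hosts
	out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /etc/test/nested/copy/64498/hosts"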

                                                
                                    
TestFunctional/parallel/CertSync (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1967: Checking for existence of /etc/ssl/certs/64498.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /etc/ssl/certs/64498.pem"
functional_test.go:1967: Checking for existence of /usr/share/ca-certificates/64498.pem within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /usr/share/ca-certificates/64498.pem"
functional_test.go:1967: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1968: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/644982.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /etc/ssl/certs/644982.pem"
functional_test.go:1994: Checking for existence of /usr/share/ca-certificates/644982.pem within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /usr/share/ca-certificates/644982.pem"
functional_test.go:1994: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1995: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.48s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:217: (dbg) Run:  kubectl --context functional-062573 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active docker"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active docker": exit status 1 (233.532538ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2022: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active crio"
functional_test.go:2022: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active crio": exit status 1 (219.789735ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.45s)
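
Note: this profile uses containerd, so docker and crio are expected to be inactive; systemctl is-active exits non-zero (status 3) for inactive units, which is why the commands above fail despite printing the expected "inactive". The same check by hand, including the active runtime (guest unit name containerd assumed):
	out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active containerd"   # expected: active
	out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active docker"       # expected: inactive
	out/minikube-linux-amd64 -p functional-062573 ssh "sudo systemctl is-active crio"         # expected: inactive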

                                                
                                    
TestFunctional/parallel/License (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2283: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-062573 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1443: (dbg) Run:  kubectl --context functional-062573 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6fddd6858d-nbptz" [56eff035-aa80-4a18-8e23-46e6710dd237] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6fddd6858d-nbptz" [56eff035-aa80-4a18-8e23-46e6710dd237] Running
functional_test.go:1448: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.011032933s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.23s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2251: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2265: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls --format short
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-062573 image ls --format short:
registry.k8s.io/pause:3.9
registry.k8s.io/kube-scheduler:v1.26.2
registry.k8s.io/kube-proxy:v1.26.2
registry.k8s.io/kube-controller-manager:v1.26.2
registry.k8s.io/kube-apiserver:v1.26.2
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-062573
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-062573
docker.io/kindest/kindnetd:v20221004-44d545d1
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls --format table
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-062573 image ls --format table:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| gcr.io/google-containers/addon-resizer      | functional-062573  | sha256:ffd4cf | 10.8MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:56cc51 | 2.4MB  |
| k8s.gcr.io/pause                            | latest             | sha256:350b16 | 72.3kB |
| registry.k8s.io/kube-apiserver              | v1.26.2            | sha256:63d323 | 35.3MB |
| docker.io/library/mysql                     | 5.7                | sha256:b6ee22 | 130MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3             | sha256:5185b9 | 14.8MB |
| registry.k8s.io/kube-proxy                  | v1.26.2            | sha256:6f64e7 | 21.5MB |
| registry.k8s.io/kube-scheduler              | v1.26.2            | sha256:db8f40 | 17.5MB |
| registry.k8s.io/pause                       | 3.9                | sha256:e6f181 | 322kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:6e38f4 | 9.06MB |
| docker.io/library/minikube-local-cache-test | functional-062573  | sha256:a54a1b | 1.12kB |
| docker.io/library/nginx                     | latest             | sha256:904b8c | 56.9MB |
| k8s.gcr.io/echoserver                       | 1.8                | sha256:82e4c8 | 46.2MB |
| k8s.gcr.io/pause                            | 3.1                | sha256:da86e6 | 315kB  |
| k8s.gcr.io/pause                            | 3.3                | sha256:0184c1 | 298kB  |
| registry.k8s.io/etcd                        | 3.5.6-0            | sha256:fce326 | 103MB  |
| registry.k8s.io/kube-controller-manager     | v1.26.2            | sha256:240e20 | 32.2MB |
| docker.io/kindest/kindnetd                  | v20221004-44d545d1 | sha256:d6e3e2 | 25.8MB |
|---------------------------------------------|--------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls --format json
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-062573 image ls --format json:
[{"id":"sha256:a54a1bf2768738f7dd6b65ddbb1864706fccf48fd210fb589e21718808b061ff","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-062573"],"size":"1125"},{"id":"sha256:b6ee2207ee7a9ed4f5c718a507fd00dace311300153b99f6830ce34741f2f093","repoDigests":["docker.io/library/mysql@sha256:1780318bdabc0edd36907bf91b47632eb912e8ea91258eca3590f8aca6f54836"],"repoTags":["docker.io/library/mysql:5.7"],"size":"130048291"},{"id":"sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"19746404"},{"id":"sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"46237695"},{"id":"sha256:240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644a
c1e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.26.2"],"size":"32180749"},{"id":"sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f","repoDigests":["docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe"],"repoTags":["docker.io/kindest/kindnetd:v20221004-44d545d1"],"size":"25830582"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"297686"},{"id":"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7","repoDigests":["registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c"],"repoTags":["registry.k8s.io/etcd:3.5.6-0"],"size":"102542580"},{"id":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:
7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"321520"},{"id":"sha256:904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8","repoDigests":["docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2"],"repoTags":["docker.io/library/nginx:latest"],"size":"56897427"},{"id":"sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-062573"],"size":"10823156"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"315399"},{"id":"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":["registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a"],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"14837849"},{"id":"sha256:6f64e7135a6ec
1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d","repoDigests":["registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078"],"repoTags":["registry.k8s.io/kube-proxy:v1.26.2"],"size":"21541935"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"72306"},{"id":"sha256:63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255
901","repoDigests":["registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.26.2"],"size":"35329425"},{"id":"sha256:db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c","repoDigests":["registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417"],"repoTags":["registry.k8s.io/kube-scheduler:v1.26.2"],"size":"17489559"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls --format yaml
functional_test.go:264: (dbg) Stdout: out/minikube-linux-amd64 -p functional-062573 image ls --format yaml:
- id: sha256:d6e3e26021b60c625f0ef5b2dd3f9e22d2d398e05bccc4fdd7d59fbbb6a04d3f
repoDigests:
- docker.io/kindest/kindnetd@sha256:273469d84ede51824194a31f6a405e3d3686b8b87cd161ea40f6bc3ff8e04ffe
repoTags:
- docker.io/kindest/kindnetd:v20221004-44d545d1
size: "25830582"
- id: sha256:a54a1bf2768738f7dd6b65ddbb1864706fccf48fd210fb589e21718808b061ff
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-062573
size: "1125"
- id: sha256:b6ee2207ee7a9ed4f5c718a507fd00dace311300153b99f6830ce34741f2f093
repoDigests:
- docker.io/library/mysql@sha256:1780318bdabc0edd36907bf91b47632eb912e8ea91258eca3590f8aca6f54836
repoTags:
- docker.io/library/mysql:5.7
size: "130048291"
- id: sha256:240e201d5b0d8c6ae66764165080c22834e3a9fed050cf5780211d973644ac1e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:5434d52f88eb16bc5e98ccb65e97e97cb5cf7861749afbf26174d27c4ece1fad
repoTags:
- registry.k8s.io/kube-controller-manager:v1.26.2
size: "32180749"
- id: sha256:6f64e7135a6ec1adfb0c12e1864b0e8392facac43717a2c6911550740ab3992d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:5dac6611aceb1452a5d4036108a15ceb0699c083a942977e30640d521e7d2078
repoTags:
- registry.k8s.io/kube-proxy:v1.26.2
size: "21541935"
- id: sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "321520"
- id: sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7
repoDigests:
- registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c
repoTags:
- registry.k8s.io/etcd:3.5.6-0
size: "102542580"
- id: sha256:115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "19746404"
- id: sha256:904b8cb13b932e23230836850610fa45dce9eb0650d5618c2b1487c2a4f577b8
repoDigests:
- docker.io/library/nginx@sha256:aa0afebbb3cfa473099a62c4b32e9b3fb73ed23f2a75a65ce1d4b4f55a5c2ef2
repoTags:
- docker.io/library/nginx:latest
size: "56897427"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- k8s.gcr.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "46237695"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "72306"
- id: sha256:db8f409d9a5d7c775876eb5e4e0c69089eff801fefbd8a356621a7b0f640f58c
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:da109877fd8fd0feba2f9a4cb6a199797452c17ddcfaf7b023cf0bac09e51417
repoTags:
- registry.k8s.io/kube-scheduler:v1.26.2
size: "17489559"
- id: sha256:ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-062573
size: "10823156"
- id: sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "14837849"
- id: sha256:63d3239c3c159b1db368f8cf0d597bef7bd4c82e15cd1b99a93fc7b50f255901
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:0f03b93af45f39704b7da175db31e20da63d2ab369f350e59de8cbbef9d703e0
repoTags:
- registry.k8s.io/kube-apiserver:v1.26.2
size: "35329425"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"

                                                
                                                
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
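
Note: the four ImageList* subtests differ only in the output encoding of the same image inventory:
	out/minikube-linux-amd64 -p functional-062573 image ls --format short
	out/minikube-linux-amd64 -p functional-062573 image ls --format table
	out/minikube-linux-amd64 -p functional-062573 image ls --format json
	out/minikube-linux-amd64 -p functional-062573 image ls --format yaml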

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh pgrep buildkitd
functional_test.go:306: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh pgrep buildkitd: exit status 1 (225.535735ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:313: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image build -t localhost/my-image:functional-062573 testdata/build
functional_test.go:313: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image build -t localhost/my-image:functional-062573 testdata/build: (2.960746349s)
functional_test.go:321: (dbg) Stderr: out/minikube-linux-amd64 -p functional-062573 image build -t localhost/my-image:functional-062573 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load .dockerignore
#2 transferring context: 2B done
#2 DONE 0.1s

                                                
                                                
#3 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#3 DONE 0.3s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 1.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:86804080dc66147670c490bdc75bbd9b8d7095d38063fa604b4a532f20d0afec 0.0s done
#8 exporting config sha256:f215da67ae8e176e1fa315317d7d998f1507f00ee731cd61c619871a8cdca863 0.0s done
#8 naming to localhost/my-image:functional-062573
#8 naming to localhost/my-image:functional-062573 done
#8 DONE 0.2s
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.42s)
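
Note: judging from the BuildKit steps above, an equivalent build context would look roughly like this (hypothetical reconstruction, not the literal testdata/build contents):
	mkdir -p build
	echo "content" > build/content.txt
	cat > build/Dockerfile <<-'EOF'
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /
	EOF
	out/minikube-linux-amd64 -p functional-062573 image build -t localhost/my-image:functional-062573 build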

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:340: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:345: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-062573
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.85s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2114: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:353: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:353: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573: (4.603541338s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.83s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573: (4.302584703s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:233: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:238: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573: (4.623608307s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.69s)
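
Note: the *LoadDaemon subtests all exercise the same path: tag an image in the host docker daemon, push it into the cluster's image store with image load --daemon, and confirm it with image ls:
	docker pull gcr.io/google-containers/addon-resizer:1.8.9
	docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-062573
	out/minikube-linux-amd64 -p functional-062573 image load --daemon gcr.io/google-containers/addon-resizer:functional-062573
	out/minikube-linux-amd64 -p functional-062573 image ls | grep addon-resizer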

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1457: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1487: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 service list -o json
functional_test.go:1492: Took "312.047109ms" to run "out/minikube-linux-amd64 -p functional-062573 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1507: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 service --namespace=default --https --url hello-node
functional_test.go:1520: found endpoint: https://192.168.39.243:32198
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1538: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1557: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 service hello-node --url
functional_test.go:1563: found endpoint for hello-node: http://192.168.39.243:32198
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
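
Note: taken together, the ServiceCmd subtests cover the usual deploy/expose/discover flow; a condensed reproduction with the same image and port:
	kubectl --context functional-062573 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-062573 expose deployment hello-node --type=NodePort --port=8080
	out/minikube-linux-amd64 -p functional-062573 service list
	out/minikube-linux-amd64 -p functional-062573 service hello-node --url                      # e.g. http://192.168.39.243:32198
	out/minikube-linux-amd64 -p functional-062573 service --namespace=default --https --url hello-node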

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:378: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image save gcr.io/google-containers/addon-resizer:functional-062573 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:378: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image save gcr.io/google-containers/addon-resizer:functional-062573 /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (1.616482283s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:390: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image rm gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/addon-resizer-save.tar: (2.075043808s)
functional_test.go:446: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.32s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:417: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 image save --daemon gcr.io/google-containers/addon-resizer:functional-062573
functional_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p functional-062573 image save --daemon gcr.io/google-containers/addon-resizer:functional-062573: (1.518149988s)
functional_test.go:427: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-062573
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.66s)
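
Note: the save/remove/load subtests round-trip the same image through a tarball and back through the host docker daemon; by hand (the tarball path here is illustrative, the run above used the Jenkins workspace):
	out/minikube-linux-amd64 -p functional-062573 image save gcr.io/google-containers/addon-resizer:functional-062573 /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-062573 image rm gcr.io/google-containers/addon-resizer:functional-062573
	out/minikube-linux-amd64 -p functional-062573 image load /tmp/addon-resizer-save.tar
	out/minikube-linux-amd64 -p functional-062573 image save --daemon gcr.io/google-containers/addon-resizer:functional-062573
	docker image inspect gcr.io/google-containers/addon-resizer:functional-062573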

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1268: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1273: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:69: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-062573 /tmp/TestFunctionalparallelMountCmdany-port340962443/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:103: wrote "test-1679435932977907844" to /tmp/TestFunctionalparallelMountCmdany-port340962443/001/created-by-test
functional_test_mount_test.go:103: wrote "test-1679435932977907844" to /tmp/TestFunctionalparallelMountCmdany-port340962443/001/created-by-test-removed-by-pod
functional_test_mount_test.go:103: wrote "test-1679435932977907844" to /tmp/TestFunctionalparallelMountCmdany-port340962443/001/test-1679435932977907844
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:111: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (231.040092ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh -- ls -la /mount-9p
functional_test_mount_test.go:129: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 21 21:58 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 21 21:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 21 21:58 test-1679435932977907844
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh cat /mount-9p/test-1679435932977907844
functional_test_mount_test.go:144: (dbg) Run:  kubectl --context functional-062573 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4e3c2e78-9dac-4c6e-a642-bc9e30b64ab0] Pending
helpers_test.go:344: "busybox-mount" [4e3c2e78-9dac-4c6e-a642-bc9e30b64ab0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4e3c2e78-9dac-4c6e-a642-bc9e30b64ab0] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4e3c2e78-9dac-4c6e-a642-bc9e30b64ab0] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:149: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.013686604s
functional_test_mount_test.go:165: (dbg) Run:  kubectl --context functional-062573 logs busybox-mount
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:177: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:90: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-062573 /tmp/TestFunctionalparallelMountCmdany-port340962443/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.10s)
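
Note: the MountCmd subtests run minikube mount as a background process and then verify the 9p mount from inside the guest; the core commands, with an illustrative host directory:
	out/minikube-linux-amd64 mount -p functional-062573 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-062573 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-062573 ssh "sudo umount -f /mount-9p"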

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1308: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1313: Took "294.618266ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1327: Took "49.740202ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1359: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1364: Took "284.581379ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1372: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1377: Took "48.441253ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.33s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:209: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-062573 /tmp/TestFunctionalparallelMountCmdspecific-port1443022322/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (257.785901ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:239: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:253: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh -- ls -la /mount-9p
functional_test_mount_test.go:257: guest mount directory contents
total 0
functional_test_mount_test.go:259: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-062573 /tmp/TestFunctionalparallelMountCmdspecific-port1443022322/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:260: reading mount text
functional_test_mount_test.go:274: done reading mount text
functional_test_mount_test.go:226: (dbg) Run:  out/minikube-linux-amd64 -p functional-062573 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:226: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-062573 ssh "sudo umount -f /mount-9p": exit status 1 (217.100868ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:228: "out/minikube-linux-amd64 -p functional-062573 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:230: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-062573 /tmp/TestFunctionalparallelMountCmdspecific-port1443022322/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.63s)

TestFunctional/delete_addon-resizer_images (0.16s)
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:188: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-062573
--- PASS: TestFunctional/delete_addon-resizer_images (0.16s)

TestFunctional/delete_my-image_image (0.06s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:196: (dbg) Run:  docker rmi -f localhost/my-image:functional-062573
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:204: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-062573
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestIngressAddonLegacy/StartLegacyK8sCluster (110.56s)
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-557517 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E0321 21:59:22.742868   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-557517 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m50.556403138s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (110.56s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.76s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons enable ingress --alsologtostderr -v=5: (10.759486819s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (10.76s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.35s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.35s)

TestIngressAddonLegacy/serial/ValidateIngressAddons (40.63s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:177: (dbg) Run:  kubectl --context ingress-addon-legacy-557517 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:177: (dbg) Done: kubectl --context ingress-addon-legacy-557517 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (16.507085326s)
addons_test.go:197: (dbg) Run:  kubectl --context ingress-addon-legacy-557517 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:210: (dbg) Run:  kubectl --context ingress-addon-legacy-557517 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a529f9f2-c323-4916-a581-f271b099ad39] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a529f9f2-c323-4916-a581-f271b099ad39] Running
E0321 22:01:38.898155   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
addons_test.go:215: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 9.016323121s
addons_test.go:227: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-557517 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:251: (dbg) Run:  kubectl --context ingress-addon-legacy-557517 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-557517 ip
addons_test.go:262: (dbg) Run:  nslookup hello-john.test 192.168.39.14
addons_test.go:271: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:271: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons disable ingress-dns --alsologtostderr -v=1: (6.641829892s)
addons_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons disable ingress --alsologtostderr -v=1
addons_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-557517 addons disable ingress --alsologtostderr -v=1: (7.306867486s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (40.63s)

TestJSONOutput/start/Command (82.16s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-075087 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd
E0321 22:02:06.584641   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-075087 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=containerd: (1m22.163977265s)
--- PASS: TestJSONOutput/start/Command (82.16s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.61s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-075087 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.61s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.56s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-075087 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.56s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (17.11s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-075087 --output=json --user=testUser
E0321 22:03:29.736590   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:29.741855   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:29.752115   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:29.772355   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:29.812605   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:29.892884   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:30.053183   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:30.373761   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:31.014679   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:32.294923   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:34.855803   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-075087 --output=json --user=testUser: (17.108521195s)
--- PASS: TestJSONOutput/stop/Command (17.11s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.44s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-388627 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-388627 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (69.158095ms)

-- stdout --
	{"specversion":"1.0","id":"a11cb406-9dc2-4092-a0bb-7dc6f210e524","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-388627] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"84e97e7e-412d-491e-bcb1-60cb68393b8b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=16124"}}
	{"specversion":"1.0","id":"4e8e50c7-b0e2-49eb-ab9d-f232379224b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dc7f12dd-bc97-4a5f-855c-40d3debc5b19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig"}}
	{"specversion":"1.0","id":"8ab2bf14-9cfe-4a5a-beea-d49b0971d921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube"}}
	{"specversion":"1.0","id":"851f07a9-17f2-4271-9110-ebe1e7f330f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ebb4707f-e551-49a2-b11b-62ad5d71806f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"fa864cf3-baf2-4a08-a8f3-5c7b9dfaf813","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-388627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-388627
--- PASS: TestErrorJSONOutput (0.44s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (109.57s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-507148 --driver=kvm2  --container-runtime=containerd
E0321 22:03:39.976720   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:03:50.217714   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:04:10.698475   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-507148 --driver=kvm2  --container-runtime=containerd: (52.036263488s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-510575 --driver=kvm2  --container-runtime=containerd
E0321 22:04:51.659813   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-510575 --driver=kvm2  --container-runtime=containerd: (54.451484307s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-507148
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-510575
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-510575" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-510575
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-510575: (1.023812458s)
helpers_test.go:175: Cleaning up "first-507148" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-507148
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-507148: (1.052882904s)
--- PASS: TestMinikubeProfile (109.57s)

TestMountStart/serial/StartWithMountFirst (26.81s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-379961 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-379961 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (25.810487063s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.81s)

TestMountStart/serial/VerifyMountFirst (0.39s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-379961 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-379961 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.39s)

TestMountStart/serial/StartWithMountSecond (27.59s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-398819 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E0321 22:06:13.583742   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:06:15.066439   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.071697   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.081921   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.102171   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.142538   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.222791   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.383159   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:15.703436   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:16.344396   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:17.625003   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:20.186045   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-398819 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (26.590034053s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.59s)

TestMountStart/serial/VerifyMountSecond (0.37s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-398819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-398819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.37s)

TestMountStart/serial/DeleteFirst (0.9s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-379961 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

TestMountStart/serial/VerifyMountPostDelete (0.37s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-398819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-398819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.37s)

TestMountStart/serial/Stop (1.15s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-398819
E0321 22:06:25.306313   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-398819: (1.152261164s)
--- PASS: TestMountStart/serial/Stop (1.15s)

TestMountStart/serial/RestartStopped (23.43s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-398819
E0321 22:06:35.546533   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:06:38.898099   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-398819: (22.425215038s)
--- PASS: TestMountStart/serial/RestartStopped (23.43s)

TestMountStart/serial/VerifyMountPostStop (0.38s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-398819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-398819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.38s)

TestMultiNode/serial/FreshStart2Nodes (141.88s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-508124 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0321 22:06:56.027215   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:07:36.988211   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:08:29.737366   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:08:57.425835   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:08:58.908799   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p multinode-508124 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (2m21.478051608s)
multinode_test.go:89: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.88s)

TestMultiNode/serial/DeployApp2Nodes (3.68s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-508124 -- rollout status deployment/busybox: (2.016442572s)
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-jkpnp -- nslookup kubernetes.io
multinode_test.go:511: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-mmbrd -- nslookup kubernetes.io
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-jkpnp -- nslookup kubernetes.default
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-mmbrd -- nslookup kubernetes.default
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-jkpnp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:529: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-mmbrd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.86s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:539: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-jkpnp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-jkpnp -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:547: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-mmbrd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:558: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-508124 -- exec busybox-6b86dd6d48-mmbrd -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)

TestMultiNode/serial/AddNode (67.81s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-508124 -v 3 --alsologtostderr
multinode_test.go:108: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-508124 -v 3 --alsologtostderr: (1m7.249651509s)
multinode_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (67.81s)

TestMultiNode/serial/ProfileList (0.25s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.25s)

TestMultiNode/serial/CopyFile (7.14s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp testdata/cp-test.txt multinode-508124:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2961632535/001/cp-test_multinode-508124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124:/home/docker/cp-test.txt multinode-508124-m02:/home/docker/cp-test_multinode-508124_multinode-508124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m02 "sudo cat /home/docker/cp-test_multinode-508124_multinode-508124-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124:/home/docker/cp-test.txt multinode-508124-m03:/home/docker/cp-test_multinode-508124_multinode-508124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m03 "sudo cat /home/docker/cp-test_multinode-508124_multinode-508124-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp testdata/cp-test.txt multinode-508124-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2961632535/001/cp-test_multinode-508124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124-m02:/home/docker/cp-test.txt multinode-508124:/home/docker/cp-test_multinode-508124-m02_multinode-508124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124 "sudo cat /home/docker/cp-test_multinode-508124-m02_multinode-508124.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124-m02:/home/docker/cp-test.txt multinode-508124-m03:/home/docker/cp-test_multinode-508124-m02_multinode-508124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m03 "sudo cat /home/docker/cp-test_multinode-508124-m02_multinode-508124-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp testdata/cp-test.txt multinode-508124-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2961632535/001/cp-test_multinode-508124-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124-m03:/home/docker/cp-test.txt multinode-508124:/home/docker/cp-test_multinode-508124-m03_multinode-508124.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124 "sudo cat /home/docker/cp-test_multinode-508124-m03_multinode-508124.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 cp multinode-508124-m03:/home/docker/cp-test.txt multinode-508124-m02:/home/docker/cp-test_multinode-508124-m03_multinode-508124-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 ssh -n multinode-508124-m02 "sudo cat /home/docker/cp-test_multinode-508124-m03_multinode-508124-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.14s)

TestMultiNode/serial/StopNode (2.09s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-linux-amd64 -p multinode-508124 node stop m03: (1.256543283s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-508124 status: exit status 7 (419.888739ms)

-- stdout --
	multinode-508124
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-508124-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-508124-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr: exit status 7 (410.481736ms)

-- stdout --
	multinode-508124
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-508124-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-508124-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0321 22:10:33.833893   77020 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:10:33.834058   77020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:10:33.834069   77020 out.go:309] Setting ErrFile to fd 2...
	I0321 22:10:33.834077   77020 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:10:33.834438   77020 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	I0321 22:10:33.834790   77020 out.go:303] Setting JSON to false
	I0321 22:10:33.834841   77020 mustload.go:65] Loading cluster: multinode-508124
	I0321 22:10:33.835254   77020 notify.go:220] Checking for updates...
	I0321 22:10:33.836040   77020 config.go:182] Loaded profile config "multinode-508124": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0321 22:10:33.836063   77020 status.go:255] checking status of multinode-508124 ...
	I0321 22:10:33.836464   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:33.836515   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:33.857686   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43787
	I0321 22:10:33.858073   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:33.858627   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:33.858653   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:33.858957   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:33.859259   77020 main.go:141] libmachine: (multinode-508124) Calling .GetState
	I0321 22:10:33.860849   77020 status.go:330] multinode-508124 host status = "Running" (err=<nil>)
	I0321 22:10:33.860867   77020 host.go:66] Checking if "multinode-508124" exists ...
	I0321 22:10:33.861127   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:33.861159   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:33.875790   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44493
	I0321 22:10:33.876141   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:33.876618   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:33.876644   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:33.876956   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:33.877133   77020 main.go:141] libmachine: (multinode-508124) Calling .GetIP
	I0321 22:10:33.879969   77020 main.go:141] libmachine: (multinode-508124) DBG | domain multinode-508124 has defined MAC address 52:54:00:d3:c5:74 in network mk-multinode-508124
	I0321 22:10:33.880425   77020 main.go:141] libmachine: (multinode-508124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:c5:74", ip: ""} in network mk-multinode-508124: {Iface:virbr1 ExpiryTime:2023-03-21 23:07:05 +0000 UTC Type:0 Mac:52:54:00:d3:c5:74 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-508124 Clientid:01:52:54:00:d3:c5:74}
	I0321 22:10:33.880464   77020 main.go:141] libmachine: (multinode-508124) DBG | domain multinode-508124 has defined IP address 192.168.39.123 and MAC address 52:54:00:d3:c5:74 in network mk-multinode-508124
	I0321 22:10:33.880594   77020 host.go:66] Checking if "multinode-508124" exists ...
	I0321 22:10:33.880948   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:33.880988   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:33.894264   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0321 22:10:33.894589   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:33.894967   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:33.894999   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:33.895293   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:33.895498   77020 main.go:141] libmachine: (multinode-508124) Calling .DriverName
	I0321 22:10:33.895684   77020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:10:33.895717   77020 main.go:141] libmachine: (multinode-508124) Calling .GetSSHHostname
	I0321 22:10:33.898536   77020 main.go:141] libmachine: (multinode-508124) DBG | domain multinode-508124 has defined MAC address 52:54:00:d3:c5:74 in network mk-multinode-508124
	I0321 22:10:33.898932   77020 main.go:141] libmachine: (multinode-508124) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d3:c5:74", ip: ""} in network mk-multinode-508124: {Iface:virbr1 ExpiryTime:2023-03-21 23:07:05 +0000 UTC Type:0 Mac:52:54:00:d3:c5:74 Iaid: IPaddr:192.168.39.123 Prefix:24 Hostname:multinode-508124 Clientid:01:52:54:00:d3:c5:74}
	I0321 22:10:33.898961   77020 main.go:141] libmachine: (multinode-508124) DBG | domain multinode-508124 has defined IP address 192.168.39.123 and MAC address 52:54:00:d3:c5:74 in network mk-multinode-508124
	I0321 22:10:33.899117   77020 main.go:141] libmachine: (multinode-508124) Calling .GetSSHPort
	I0321 22:10:33.899308   77020 main.go:141] libmachine: (multinode-508124) Calling .GetSSHKeyPath
	I0321 22:10:33.899446   77020 main.go:141] libmachine: (multinode-508124) Calling .GetSSHUsername
	I0321 22:10:33.899575   77020 sshutil.go:53] new ssh client: &{IP:192.168.39.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/multinode-508124/id_rsa Username:docker}
	I0321 22:10:33.985222   77020 ssh_runner.go:195] Run: systemctl --version
	I0321 22:10:33.990990   77020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:10:34.003225   77020 kubeconfig.go:92] found "multinode-508124" server: "https://192.168.39.123:8443"
	I0321 22:10:34.003247   77020 api_server.go:165] Checking apiserver status ...
	I0321 22:10:34.003280   77020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0321 22:10:34.015213   77020 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1119/cgroup
	I0321 22:10:34.022691   77020 api_server.go:181] apiserver freezer: "9:freezer:/kubepods/burstable/podba0fe7c0c7d0f5c6dba06066a66a90af/2f602274b4839b4816641410a358a302e8f0670ca3929291e197417f582edb2b"
	I0321 22:10:34.022748   77020 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podba0fe7c0c7d0f5c6dba06066a66a90af/2f602274b4839b4816641410a358a302e8f0670ca3929291e197417f582edb2b/freezer.state
	I0321 22:10:34.030871   77020 api_server.go:203] freezer state: "THAWED"
	I0321 22:10:34.030887   77020 api_server.go:252] Checking apiserver healthz at https://192.168.39.123:8443/healthz ...
	I0321 22:10:34.035151   77020 api_server.go:278] https://192.168.39.123:8443/healthz returned 200:
	ok
	I0321 22:10:34.035170   77020 status.go:421] multinode-508124 apiserver status = Running (err=<nil>)
	I0321 22:10:34.035181   77020 status.go:257] multinode-508124 status: &{Name:multinode-508124 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0321 22:10:34.035206   77020 status.go:255] checking status of multinode-508124-m02 ...
	I0321 22:10:34.035547   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:34.035590   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:34.049234   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38049
	I0321 22:10:34.049612   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:34.050037   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:34.050057   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:34.050355   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:34.050522   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .GetState
	I0321 22:10:34.051915   77020 status.go:330] multinode-508124-m02 host status = "Running" (err=<nil>)
	I0321 22:10:34.051943   77020 host.go:66] Checking if "multinode-508124-m02" exists ...
	I0321 22:10:34.052212   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:34.052242   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:34.065359   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38009
	I0321 22:10:34.065682   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:34.066044   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:34.066063   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:34.066355   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:34.066503   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .GetIP
	I0321 22:10:34.068827   77020 main.go:141] libmachine: (multinode-508124-m02) DBG | domain multinode-508124-m02 has defined MAC address 52:54:00:13:f2:d3 in network mk-multinode-508124
	I0321 22:10:34.069229   77020 main.go:141] libmachine: (multinode-508124-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:f2:d3", ip: ""} in network mk-multinode-508124: {Iface:virbr1 ExpiryTime:2023-03-21 23:08:21 +0000 UTC Type:0 Mac:52:54:00:13:f2:d3 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-508124-m02 Clientid:01:52:54:00:13:f2:d3}
	I0321 22:10:34.069252   77020 main.go:141] libmachine: (multinode-508124-m02) DBG | domain multinode-508124-m02 has defined IP address 192.168.39.9 and MAC address 52:54:00:13:f2:d3 in network mk-multinode-508124
	I0321 22:10:34.069396   77020 host.go:66] Checking if "multinode-508124-m02" exists ...
	I0321 22:10:34.069678   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:34.069717   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:34.083058   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I0321 22:10:34.083363   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:34.083761   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:34.083778   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:34.084094   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:34.084278   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .DriverName
	I0321 22:10:34.084438   77020 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0321 22:10:34.084457   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .GetSSHHostname
	I0321 22:10:34.086797   77020 main.go:141] libmachine: (multinode-508124-m02) DBG | domain multinode-508124-m02 has defined MAC address 52:54:00:13:f2:d3 in network mk-multinode-508124
	I0321 22:10:34.087163   77020 main.go:141] libmachine: (multinode-508124-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:13:f2:d3", ip: ""} in network mk-multinode-508124: {Iface:virbr1 ExpiryTime:2023-03-21 23:08:21 +0000 UTC Type:0 Mac:52:54:00:13:f2:d3 Iaid: IPaddr:192.168.39.9 Prefix:24 Hostname:multinode-508124-m02 Clientid:01:52:54:00:13:f2:d3}
	I0321 22:10:34.087201   77020 main.go:141] libmachine: (multinode-508124-m02) DBG | domain multinode-508124-m02 has defined IP address 192.168.39.9 and MAC address 52:54:00:13:f2:d3 in network mk-multinode-508124
	I0321 22:10:34.087321   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .GetSSHPort
	I0321 22:10:34.087480   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .GetSSHKeyPath
	I0321 22:10:34.087651   77020 main.go:141] libmachine: (multinode-508124-m02) Calling .GetSSHUsername
	I0321 22:10:34.087794   77020 sshutil.go:53] new ssh client: &{IP:192.168.39.9 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/16124-57437/.minikube/machines/multinode-508124-m02/id_rsa Username:docker}
	I0321 22:10:34.168875   77020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0321 22:10:34.180152   77020 status.go:257] multinode-508124-m02 status: &{Name:multinode-508124-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0321 22:10:34.180170   77020 status.go:255] checking status of multinode-508124-m03 ...
	I0321 22:10:34.180468   77020 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:10:34.180507   77020 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:10:34.193952   77020 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38959
	I0321 22:10:34.194332   77020 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:10:34.194791   77020 main.go:141] libmachine: Using API Version  1
	I0321 22:10:34.194814   77020 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:10:34.195104   77020 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:10:34.195285   77020 main.go:141] libmachine: (multinode-508124-m03) Calling .GetState
	I0321 22:10:34.196711   77020 status.go:330] multinode-508124-m03 host status = "Stopped" (err=<nil>)
	I0321 22:10:34.196725   77020 status.go:343] host is not running, skipping remaining checks
	I0321 22:10:34.196730   77020 status.go:257] multinode-508124-m03 status: &{Name:multinode-508124-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.09s)
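The stderr above traces minikube's per-node status flow: for each node the kvm2 driver plugin is launched, the VM state is queried, and only when the host reports "Running" does the check continue by resolving the node's IP from the libvirt DHCP lease and probing the kubelet unit over SSH; a stopped host short-circuits with "host is not running, skipping remaining checks". A rough Go sketch of that flow, using invented helper names (Driver, sshRun, NodeStatus) rather than minikube's real libmachine API:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Hypothetical condensation of the per-node status flow traced above.
	// Driver, sshRun and NodeStatus are invented for illustration; they are
	// not minikube's real libmachine types.
	type Driver interface {
		GetState() (string, error)
		GetIP() (string, error)
	}

	type NodeStatus struct {
		Name, Host, Kubelet string
	}

	// sshRun stands in for minikube's ssh_runner; here it simply shells out.
	func sshRun(ip, cmd string) error {
		return exec.Command("ssh", "docker@"+ip, cmd).Run()
	}

	func nodeStatus(name string, d Driver) NodeStatus {
		st := NodeStatus{Name: name, Host: "Stopped", Kubelet: "Stopped"}
		state, err := d.GetState() // plugin RPC, as in "Calling .GetState"
		if err != nil || state != "Running" {
			return st // host is not running, skipping remaining checks
		}
		st.Host = "Running"
		ip, err := d.GetIP() // resolved from the DHCP lease shown in the log
		if err != nil {
			return st
		}
		// Mirrors: sudo systemctl is-active --quiet service kubelet
		if sshRun(ip, "sudo systemctl is-active --quiet service kubelet") == nil {
			st.Kubelet = "Running"
		}
		return st
	}

	// stubDriver always reports a stopped VM, so main runs without a hypervisor.
	type stubDriver struct{}

	func (stubDriver) GetState() (string, error) { return "Stopped", nil }
	func (stubDriver) GetIP() (string, error)    { return "", nil }

	func main() {
		fmt.Println(nodeStatus("multinode-508124-m03", stubDriver{}))
	}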

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (130.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:252: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 node start m03 --alsologtostderr
E0321 22:11:15.066271   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:11:38.897791   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 22:11:42.750481   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
multinode_test.go:252: (dbg) Done: out/minikube-linux-amd64 -p multinode-508124 node start m03 --alsologtostderr: (2m9.867760959s)
multinode_test.go:259: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (130.50s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (536.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-508124
multinode_test.go:288: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-508124
E0321 22:13:01.945674   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 22:13:29.737510   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
multinode_test.go:288: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-508124: (3m14.540357496s)
multinode_test.go:293: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-508124 --wait=true -v=8 --alsologtostderr
E0321 22:16:15.066410   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:16:38.897778   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 22:18:29.736812   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:19:52.786512   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:21:15.066562   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:21:38.897674   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
multinode_test.go:293: (dbg) Done: out/minikube-linux-amd64 start -p multinode-508124 --wait=true -v=8 --alsologtostderr: (5m41.481841745s)
multinode_test.go:298: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-508124
--- PASS: TestMultiNode/serial/RestartKeepsNodes (536.12s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-linux-amd64 -p multinode-508124 node delete m03: (1.540098726s)
multinode_test.go:398: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.07s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (183.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 stop
E0321 22:22:38.110968   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:23:29.736443   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
multinode_test.go:312: (dbg) Done: out/minikube-linux-amd64 -p multinode-508124 stop: (3m3.194206819s)
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-508124 status: exit status 7 (80.241653ms)

                                                
                                                
-- stdout --
	multinode-508124
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-508124-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr: exit status 7 (78.335465ms)

                                                
                                                
-- stdout --
	multinode-508124
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-508124-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0321 22:24:46.197555   78676 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:24:46.197668   78676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:24:46.197676   78676 out.go:309] Setting ErrFile to fd 2...
	I0321 22:24:46.197680   78676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:24:46.197778   78676 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	I0321 22:24:46.197908   78676 out.go:303] Setting JSON to false
	I0321 22:24:46.197946   78676 mustload.go:65] Loading cluster: multinode-508124
	I0321 22:24:46.198042   78676 notify.go:220] Checking for updates...
	I0321 22:24:46.198309   78676 config.go:182] Loaded profile config "multinode-508124": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0321 22:24:46.198325   78676 status.go:255] checking status of multinode-508124 ...
	I0321 22:24:46.198655   78676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:24:46.198708   78676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:24:46.212569   78676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33347
	I0321 22:24:46.212886   78676 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:24:46.213404   78676 main.go:141] libmachine: Using API Version  1
	I0321 22:24:46.213431   78676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:24:46.213754   78676 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:24:46.213938   78676 main.go:141] libmachine: (multinode-508124) Calling .GetState
	I0321 22:24:46.215382   78676 status.go:330] multinode-508124 host status = "Stopped" (err=<nil>)
	I0321 22:24:46.215395   78676 status.go:343] host is not running, skipping remaining checks
	I0321 22:24:46.215401   78676 status.go:257] multinode-508124 status: &{Name:multinode-508124 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0321 22:24:46.215445   78676 status.go:255] checking status of multinode-508124-m02 ...
	I0321 22:24:46.215705   78676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_containerd_integration/out/docker-machine-driver-kvm2
	I0321 22:24:46.215752   78676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0321 22:24:46.228676   78676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45511
	I0321 22:24:46.228944   78676 main.go:141] libmachine: () Calling .GetVersion
	I0321 22:24:46.229327   78676 main.go:141] libmachine: Using API Version  1
	I0321 22:24:46.229348   78676 main.go:141] libmachine: () Calling .SetConfigRaw
	I0321 22:24:46.229640   78676 main.go:141] libmachine: () Calling .GetMachineName
	I0321 22:24:46.229811   78676 main.go:141] libmachine: (multinode-508124-m02) Calling .GetState
	I0321 22:24:46.231185   78676 status.go:330] multinode-508124-m02 host status = "Stopped" (err=<nil>)
	I0321 22:24:46.231198   78676 status.go:343] host is not running, skipping remaining checks
	I0321 22:24:46.231207   78676 status.go:257] multinode-508124-m02 status: &{Name:multinode-508124-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (183.35s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (237.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:352: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-508124 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E0321 22:26:15.066393   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:26:38.897724   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 22:28:29.736564   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
multinode_test.go:352: (dbg) Done: out/minikube-linux-amd64 start -p multinode-508124 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (3m57.075407356s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-508124 status --alsologtostderr
multinode_test.go:372: (dbg) Run:  kubectl get nodes
multinode_test.go:380: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (237.59s)
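The readiness probe at multinode_test.go:380 shells out to kubectl with a Go template that prints each node's Ready condition. For comparison, an equivalent check written against client-go (a sketch under the assumption that KUBECONFIG points at the restarted cluster; this is not what the test itself does):

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// Print each node's Ready condition, like the go-template query above.
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady {
					fmt.Printf("%s\t%s\n", n.Name, c.Status)
				}
			}
		}
	}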

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (57.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-508124
multinode_test.go:450: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-508124-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-508124-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (64.740944ms)

                                                
                                                
-- stdout --
	* [multinode-508124-m02] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-508124-m02' is duplicated with machine name 'multinode-508124-m02' in profile 'multinode-508124'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-508124-m03 --driver=kvm2  --container-runtime=containerd
multinode_test.go:458: (dbg) Done: out/minikube-linux-amd64 start -p multinode-508124-m03 --driver=kvm2  --container-runtime=containerd: (56.302487009s)
multinode_test.go:465: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-508124
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-508124: exit status 80 (211.949046ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-508124
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-508124-m03 already exists in multinode-508124-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-508124-m03
multinode_test.go:470: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-508124-m03: (1.073007302s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (57.70s)

                                                
                                    
x
+
TestScheduledStopUnix (131.2s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-200812 --memory=2048 --driver=kvm2  --container-runtime=containerd
E0321 22:36:32.789065   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:36:38.897763   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-200812 --memory=2048 --driver=kvm2  --container-runtime=containerd: (59.504439483s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-200812 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-200812 -n scheduled-stop-200812
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-200812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-200812 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-200812 -n scheduled-stop-200812
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-200812
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-200812 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-200812
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-200812: exit status 7 (64.173124ms)

                                                
                                                
-- stdout --
	scheduled-stop-200812
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-200812 -n scheduled-stop-200812
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-200812 -n scheduled-stop-200812: exit status 7 (63.885277ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-200812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-200812
--- PASS: TestScheduledStopUnix (131.20s)
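The scheduled-stop steps above schedule a stop, replace the schedule, and cancel it; the "os: process already finished" lines are the previously scheduled stop process being terminated when a new schedule or a cancel supersedes it. A toy schedule/replace/cancel sketch with in-process timers (plain Go, not minikube's daemonized implementation):

	package main

	import (
		"fmt"
		"time"
	)

	// Toy scheduler: each new schedule replaces the pending one, and cancel
	// drops it. Minikube's real --schedule spawns a separate daemonized
	// process rather than an in-process timer; this is illustration only.
	type scheduler struct{ timer *time.Timer }

	func (s *scheduler) schedule(d time.Duration, stop func()) {
		s.cancel() // re-running --schedule supersedes the earlier one
		s.timer = time.AfterFunc(d, stop)
	}

	func (s *scheduler) cancel() { // like --cancel-scheduled
		if s.timer != nil {
			s.timer.Stop()
			s.timer = nil
		}
	}

	func main() {
		var s scheduler
		stop := func() { fmt.Println("stopping cluster") }
		s.schedule(5*time.Minute, stop)  // --schedule 5m
		s.schedule(15*time.Second, stop) // --schedule 15s replaces the 5m schedule
		s.cancel()                       // --cancel-scheduled
		fmt.Println("no stop pending")
	}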

                                                
                                    
x
+
TestRunningBinaryUpgrade (156.8s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:128: (dbg) Run:  /tmp/minikube-v1.22.0.1288714592.exe start -p running-upgrade-675432 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:128: (dbg) Done: /tmp/minikube-v1.22.0.1288714592.exe start -p running-upgrade-675432 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (1m39.272986238s)
version_upgrade_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-675432 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-675432 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (55.150965068s)
helpers_test.go:175: Cleaning up "running-upgrade-675432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-675432
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-675432: (1.551992058s)
--- PASS: TestRunningBinaryUpgrade (156.80s)

                                                
                                    
x
+
TestKubernetesUpgrade (229.94s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:230: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m29.108302476s)
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-621345
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-621345: (2.346568848s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-621345 status --format={{.Host}}
version_upgrade_test.go:240: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-621345 status --format={{.Host}}: exit status 7 (71.998533ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:242: status error: exit status 7 (may be ok)
version_upgrade_test.go:251: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E0321 22:41:38.898430   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
version_upgrade_test.go:251: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m33.982347714s)
version_upgrade_test.go:256: (dbg) Run:  kubectl --context kubernetes-upgrade-621345 version --output=json
version_upgrade_test.go:275: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (212.027104ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-621345] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.26.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-621345
	    minikube start -p kubernetes-upgrade-621345 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6213452 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.26.2, by running:
	    
	    minikube start -p kubernetes-upgrade-621345 --kubernetes-version=v1.26.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:281: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:283: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:283: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-621345 --memory=2200 --kubernetes-version=v1.26.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (42.959659745s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-621345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-621345
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-621345: (1.183773303s)
--- PASS: TestKubernetesUpgrade (229.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725948 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-725948 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=containerd: exit status 14 (78.47719ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-725948] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (132.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725948 --driver=kvm2  --container-runtime=containerd
E0321 22:38:29.736533   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:39:18.111769   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725948 --driver=kvm2  --container-runtime=containerd: (2m12.127100889s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-725948 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (132.46s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.52s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (220.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:191: (dbg) Run:  /tmp/minikube-v1.22.0.3715428213.exe start -p stopped-upgrade-789378 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:191: (dbg) Done: /tmp/minikube-v1.22.0.3715428213.exe start -p stopped-upgrade-789378 --memory=2200 --vm-driver=kvm2  --container-runtime=containerd: (2m6.909532358s)
version_upgrade_test.go:200: (dbg) Run:  /tmp/minikube-v1.22.0.3715428213.exe -p stopped-upgrade-789378 stop
version_upgrade_test.go:200: (dbg) Done: /tmp/minikube-v1.22.0.3715428213.exe -p stopped-upgrade-789378 stop: (2.10469597s)
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-789378 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-789378 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m31.020758383s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (220.04s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (34.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725948 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725948 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (33.295769874s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-725948 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-725948 status -o json: exit status 2 (232.095422ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-725948","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-725948
E0321 22:41:15.067136   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-725948: (1.080457189s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (34.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (30.87s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725948 --no-kubernetes --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725948 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (30.867191669s)
--- PASS: TestNoKubernetes/serial/Start (30.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-725948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-725948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.277908ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (6.7s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (3.660277373s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.040135398s)
--- PASS: TestNoKubernetes/serial/ProfileList (6.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-725948
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-725948: (1.262090235s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (35.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-725948 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-725948 --driver=kvm2  --container-runtime=containerd: (35.137471742s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (35.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-725948 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-725948 "sudo systemctl is-active --quiet service kubelet": exit status 1 (212.951895ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestPause/serial/Start (112.61s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-911439 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
E0321 22:43:29.736666   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-911439 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m52.608128541s)
--- PASS: TestPause/serial/Start (112.61s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:214: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-789378
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:230: (dbg) Run:  out/minikube-linux-amd64 start -p false-577737 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-577737 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (552.808654ms)

                                                
                                                
-- stdout --
	* [false-577737] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=16124
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0321 22:44:52.372662   86193 out.go:296] Setting OutFile to fd 1 ...
	I0321 22:44:52.372806   86193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:44:52.372815   86193 out.go:309] Setting ErrFile to fd 2...
	I0321 22:44:52.372819   86193 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0321 22:44:52.372919   86193 root.go:336] Updating PATH: /home/jenkins/minikube-integration/16124-57437/.minikube/bin
	I0321 22:44:52.373602   86193 out.go:303] Setting JSON to false
	I0321 22:44:52.374802   86193 start.go:125] hostinfo: {"hostname":"ubuntu-20-agent-12","uptime":12442,"bootTime":1679426250,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1030-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0321 22:44:52.374863   86193 start.go:135] virtualization: kvm guest
	I0321 22:44:52.425329   86193 out.go:177] * [false-577737] minikube v1.29.0 on Ubuntu 20.04 (kvm/amd64)
	I0321 22:44:52.598080   86193 notify.go:220] Checking for updates...
	I0321 22:44:52.733300   86193 out.go:177]   - MINIKUBE_LOCATION=16124
	I0321 22:44:52.816616   86193 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0321 22:44:52.818677   86193 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/16124-57437/kubeconfig
	I0321 22:44:52.820479   86193 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/16124-57437/.minikube
	I0321 22:44:52.822224   86193 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0321 22:44:52.823881   86193 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0321 22:44:52.826245   86193 config.go:182] Loaded profile config "force-systemd-flag-141887": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0321 22:44:52.826365   86193 config.go:182] Loaded profile config "pause-911439": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.26.2
	I0321 22:44:52.826437   86193 config.go:182] Loaded profile config "running-upgrade-675432": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.21.2
	I0321 22:44:52.826476   86193 driver.go:365] Setting default libvirt URI to qemu:///system
	I0321 22:44:52.866665   86193 out.go:177] * Using the kvm2 driver based on user configuration
	I0321 22:44:52.868066   86193 start.go:295] selected driver: kvm2
	I0321 22:44:52.868081   86193 start.go:856] validating driver "kvm2" against <nil>
	I0321 22:44:52.868094   86193 start.go:867] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0321 22:44:52.870098   86193 out.go:177] 
	W0321 22:44:52.871620   86193 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0321 22:44:52.873090   86193 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:86: 
----------------------- debugLogs start: false-577737 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-577737" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.61.216:8443
  name: pause-911439
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.72.186:8443
  name: running-upgrade-675432
contexts:
- context:
    cluster: pause-911439
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: pause-911439
  name: pause-911439
- context:
    cluster: running-upgrade-675432
    user: running-upgrade-675432
  name: running-upgrade-675432
current-context: running-upgrade-675432
kind: Config
preferences: {}
users:
- name: pause-911439
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/pause-911439/client.crt
    client-key: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/pause-911439/client.key
- name: running-upgrade-675432
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/running-upgrade-675432/client.crt
    client-key: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/running-upgrade-675432/client.key
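
The dump lists two live profiles (pause-911439 and running-upgrade-675432) with running-upgrade-675432 as the current context, which is why every command issued against the deleted false-577737 context fails with "context was not found". For illustration only (this is not part of the test run), a minimal Go sketch that lists these contexts and switches to pause-911439; it assumes kubectl is on PATH and the kubeconfig path is a placeholder:

```go
// Illustration only: inspect the kubeconfig dumped above and point kubectl at
// the pause-911439 context instead of the current running-upgrade-675432 one.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeconfig := os.Getenv("HOME") + "/.kube/config" // placeholder; the CI run writes its own kubeconfig

	// List the contexts the dump shows: pause-911439 and running-upgrade-675432.
	out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"config", "get-contexts", "-o", "name").CombinedOutput()
	if err != nil {
		fmt.Fprintf(os.Stderr, "get-contexts failed: %v\n%s", err, out)
		os.Exit(1)
	}
	fmt.Printf("contexts:\n%s", out)

	// Switch away from running-upgrade-675432 (the current-context in the dump).
	if out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
		"config", "use-context", "pause-911439").CombinedOutput(); err != nil {
		fmt.Fprintf(os.Stderr, "use-context failed: %v\n%s", err, out)
		os.Exit(1)
	}
}
```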

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-577737

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-577737"

                                                
                                                
----------------------- debugLogs end: false-577737 [took: 2.857063263s] --------------------------------
helpers_test.go:175: Cleaning up "false-577737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-577737
--- PASS: TestNetworkPlugins/group/false (3.84s)
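
The debugLogs block above is a best-effort post-mortem: each ">>> host:" or ">>> k8s:" section is one probe command run against the profile, and failures (here, "Profile not found" because false-577737 was never started) are recorded rather than aborting the collection. A hypothetical Go sketch of that pattern follows; it is not the minikube test helpers' actual code, and the probe commands shown are assumptions:

```go
// Hypothetical sketch (not the minikube test helpers' real code) of the
// debugLogs post-mortem pattern: run a fixed list of host and k8s probes
// against a profile, print each under a ">>> " header, and keep going when a
// probe fails (e.g. "Profile not found" for false-577737).
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	profile := "false-577737" // profile name from the log above

	probes := []struct {
		label string
		args  []string
	}{
		// Illustrative probe commands; the real collection covers many more.
		{"host: /etc/cni", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "sudo ls -la /etc/cni"}},
		{"host: ip a s", []string{"out/minikube-linux-amd64", "-p", profile, "ssh", "ip a s"}},
		{"k8s: kube-proxy logs", []string{"kubectl", "--context", profile, "logs", "-n", "kube-system", "-l", "k8s-app=kube-proxy"}},
	}

	for _, p := range probes {
		fmt.Printf(">>> %s:\n", p.label)
		out, err := exec.Command(p.args[0], p.args[1:]...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			// Best-effort: record the failure and continue with the next probe.
			fmt.Printf("(probe failed: %v)\n", err)
		}
		fmt.Println()
	}
}
```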

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (319.82s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-538335 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-538335 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (5m19.821356019s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (319.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (138.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-636285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-636285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (2m18.548514531s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (138.55s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (47.49s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-911439 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-911439 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (47.472814253s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (152.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-615540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-615540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (2m32.765342801s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (152.77s)

                                                
                                    
x
+
TestPause/serial/Pause (0.93s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-911439 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-911439 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-911439 --output=json --layout=cluster: exit status 2 (277.726023ms)

                                                
                                                
-- stdout --
	{"Name":"pause-911439","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.29.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-911439","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
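
The --output=json --layout=cluster status above is a single JSON document with per-node component states (apiserver Paused, kubelet Stopped). A small Go sketch that decodes it; the struct mirrors the field names visible in the log and is an illustration, not minikube's own type:

```go
// Decode the `minikube status --output=json --layout=cluster` document shown above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type clusterStatus struct {
	Name          string               `json:"Name"`
	StatusCode    int                  `json:"StatusCode"`
	StatusName    string               `json:"StatusName"`
	Step          string               `json:"Step"`
	StepDetail    string               `json:"StepDetail"`
	BinaryVersion string               `json:"BinaryVersion"`
	Components    map[string]component `json:"Components"`
	Nodes         []struct {
		Name       string               `json:"Name"`
		StatusCode int                  `json:"StatusCode"`
		StatusName string               `json:"StatusName"`
		Components map[string]component `json:"Components"`
	} `json:"Nodes"`
}

func main() {
	// `minikube status` exits non-zero (exit status 2 here) when the cluster is
	// paused, so ignore the exit error and decode whatever was printed.
	out, _ := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "pause-911439", "--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		panic(err)
	}
	fmt.Printf("%s: %s (%s)\n", st.Name, st.StatusName, st.StepDetail)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s apiserver=%s kubelet=%s\n",
			n.Name, n.Components["apiserver"].StatusName, n.Components["kubelet"].StatusName)
	}
}
```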

                                                
                                    
x
+
TestPause/serial/Unpause (0.82s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-911439 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.82s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.82s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-911439 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.82s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.18s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-911439 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-911439 --alsologtostderr -v=5: (1.177869934s)
--- PASS: TestPause/serial/DeletePaused (1.18s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.52s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.52s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-644329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
E0321 22:46:15.067126   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:46:21.947177   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
E0321 22:46:38.898203   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-644329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (1m28.70594226s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.71s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-636285 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c81eeb88-bd9b-4d34-8206-7474bcae0fb9] Pending
helpers_test.go:344: "busybox" [c81eeb88-bd9b-4d34-8206-7474bcae0fb9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c81eeb88-bd9b-4d34-8206-7474bcae0fb9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.024406666s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-636285 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.46s)
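
DeployApp follows a create / wait-for-Ready / exec pattern. A minimal Go sketch of the same flow driven through kubectl; the run helper, manifest path, and timeout are illustrative rather than the test's own code:

```go
// Deploy the busybox manifest, wait for the labelled pod to become Ready,
// then check the open-file limit inside it.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%s %v failed: %v\n%s", name, args, err, out))
	}
	return string(out)
}

func main() {
	ctx := "no-preload-636285" // profile/context name from the log above

	run("kubectl", "--context", ctx, "create", "-f", "testdata/busybox.yaml")
	// kubectl wait blocks until the pod reports Ready or the timeout expires.
	run("kubectl", "--context", ctx, "wait", "--for=condition=Ready",
		"pod", "-l", "integration-test=busybox", "--timeout=8m0s")
	fmt.Print(run("kubectl", "--context", ctx, "exec", "busybox", "--",
		"/bin/sh", "-c", "ulimit -n"))
}
```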

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-644329 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [996c3fa8-87a9-49bf-901b-1820237c196e] Pending
helpers_test.go:344: "busybox" [996c3fa8-87a9-49bf-901b-1820237c196e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [996c3fa8-87a9-49bf-901b-1820237c196e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.023001821s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-644329 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.44s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-636285 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-636285 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.90s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (91.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-636285 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-636285 --alsologtostderr -v=3: (1m31.861199139s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (91.86s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-644329 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-644329 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.84s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (91.81s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-644329 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-644329 --alsologtostderr -v=3: (1m31.811268744s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (91.81s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-615540 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [414ce494-6d1f-4065-9dd9-3a2824b71465] Pending
helpers_test.go:344: "busybox" [414ce494-6d1f-4065-9dd9-3a2824b71465] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [414ce494-6d1f-4065-9dd9-3a2824b71465] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.018758987s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-615540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-615540 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-615540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (91.7s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-615540 --alsologtostderr -v=3
E0321 22:48:29.736823   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-615540 --alsologtostderr -v=3: (1m31.698465878s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (91.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-636285 -n no-preload-636285
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-636285 -n no-preload-636285: exit status 7 (65.869061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-636285 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)
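
EnableAddonAfterStop treats a non-zero exit from minikube status as expected: exit status 7 simply means the host is stopped, after which the dashboard addon can still be enabled. A short Go sketch of that exit-code handling, with the profile name taken from the log; this is not the test's actual implementation:

```go
// Run `minikube status` against a stopped profile and treat exit status 7 as
// "stopped, may be ok" rather than as a fatal error.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "no-preload-636285", "-n", "no-preload-636285")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 signals a stopped host; addons can still be enabled.
		fmt.Printf("status exited %d (may be ok): %s\n", exitErr.ExitCode(), out)
	} else if err != nil {
		panic(err) // the binary itself could not be run
	} else {
		fmt.Printf("host status: %s\n", out)
	}
}
```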

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (328.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-636285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-636285 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (5m28.400847041s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-636285 -n no-preload-636285
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (328.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329: exit status 7 (74.081416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-644329 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (423.59s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-644329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-644329 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (7m3.212248943s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (423.59s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615540 -n embed-certs-615540
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615540 -n embed-certs-615540: exit status 7 (71.492391ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-615540 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (677.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-615540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-615540 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (11m17.074066604s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-615540 -n embed-certs-615540
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (677.35s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-538335 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1f6b149b-186b-46a8-805d-90fb3420e2c8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1f6b149b-186b-46a8-805d-90fb3420e2c8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.025614263s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-538335 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-538335 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-538335 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.69s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (91.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-538335 --alsologtostderr -v=3
E0321 22:51:15.067247   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:51:38.898468   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-538335 --alsologtostderr -v=3: (1m31.85344313s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (91.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-538335 -n old-k8s-version-538335
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-538335 -n old-k8s-version-538335: exit status 7 (65.738701ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-538335 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (120.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-538335 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0
E0321 22:53:12.789427   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
E0321 22:53:29.736514   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/functional-062573/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-538335 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.16.0: (1m59.717526876s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-538335 -n old-k8s-version-538335
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (120.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7nbmg" [b932f956-a8d9-48e0-a312-20cbeeb9d272] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7nbmg" [b932f956-a8d9-48e0-a312-20cbeeb9d272] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 25.016343021s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (25.02s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-84b68f675b-7nbmg" [b932f956-a8d9-48e0-a312-20cbeeb9d272] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008768909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-538335 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-538335 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)
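
VerifyKubernetesImages lists images with crictl inside the node and reports anything outside the expected registries. A hedged Go sketch of that check: the JSON field names follow crictl's list-images output, and the registry filter is a simplification of what the test allows (the real check also permits minikube's own images such as the storage provisioner):

```go
// List images on the node via crictl and report tags outside the core
// Kubernetes registries, similar to the "Found non-minikube image" lines above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "old-k8s-version-538335",
		"sudo crictl images -o json").Output()
	if err != nil {
		panic(err)
	}

	var resp struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &resp); err != nil {
		panic(err)
	}

	for _, img := range resp.Images {
		for _, tag := range img.RepoTags {
			// Simplified filter: anything not from k8s.gcr.io / registry.k8s.io is reported.
			if !strings.HasPrefix(tag, "k8s.gcr.io/") && !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}
```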

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-538335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-538335 -n old-k8s-version-538335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-538335 -n old-k8s-version-538335: exit status 2 (246.952335ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-538335 -n old-k8s-version-538335
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-538335 -n old-k8s-version-538335: exit status 2 (251.364239ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-538335 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-538335 -n old-k8s-version-538335
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-538335 -n old-k8s-version-538335
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.39s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dt5tq" [3ac38851-1936-4102-8181-46f80994d0d5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.015243562s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (69.82s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-541801 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-541801 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (1m9.815479572s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (69.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-dt5tq" [3ac38851-1936-4102-8181-46f80994d0d5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009656865s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-636285 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-636285 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-636285 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-636285 -n no-preload-636285
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-636285 -n no-preload-636285: exit status 2 (278.445275ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-636285 -n no-preload-636285
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-636285 -n no-preload-636285: exit status 2 (260.961654ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-636285 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-636285 -n no-preload-636285
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-636285 -n no-preload-636285
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.77s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (125.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p auto-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
E0321 22:55:23.252979   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.258289   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.268554   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.288836   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.329095   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.409416   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.569898   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:23.890529   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:24.531452   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:25.812306   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:28.372497   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:33.493336   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:55:43.734047   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p auto-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (2m5.649797622s)
--- PASS: TestNetworkPlugins/group/auto/Start (125.65s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-541801 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-541801 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.115057295s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-541801 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-541801 --alsologtostderr -v=3: (3.128080745s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.13s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-541801 -n newest-cni-541801
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-541801 -n newest-cni-541801: exit status 7 (82.782753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-541801 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (72.9s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-541801 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2
E0321 22:55:58.112806   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
E0321 22:56:04.215143   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:56:15.066782   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/ingress-addon-legacy-557517/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-541801 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.26.2: (1m12.635160695s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-541801 -n newest-cni-541801
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (72.90s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.05s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-vlt6x" [91f49421-9861-428f-a9ce-c79a38d81aeb] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-vlt6x" [91f49421-9861-428f-a9ce-c79a38d81aeb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 14.04718722s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (14.05s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-vlt6x" [91f49421-9861-428f-a9ce-c79a38d81aeb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007728451s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-644329 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-644329 "sudo crictl images -o json"
E0321 22:56:38.898151   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/addons-248329/client.crt: no such file or directory
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-644329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329: exit status 2 (242.693606ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329: exit status 2 (238.207777ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-644329 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-644329 -n default-k8s-diff-port-644329
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (80.06s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
E0321 22:56:45.176087   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m20.063535861s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (80.06s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.38s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context auto-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-t2tm9" [d2b20e5c-60b0-457a-ab94-4c5e4f32cf0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-t2tm9" [d2b20e5c-60b0-457a-ab94-4c5e4f32cf0f] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.010097579s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.38s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-541801 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.12s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-541801 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-541801 -n newest-cni-541801
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-541801 -n newest-cni-541801: exit status 2 (247.762254ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-541801 -n newest-cni-541801
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-541801 -n newest-cni-541801: exit status 2 (241.545612ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-541801 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-541801 -n newest-cni-541801
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-541801 -n newest-cni-541801
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.12s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:174: (dbg) Run:  kubectl --context auto-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:193: (dbg) Run:  kubectl --context auto-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:248: (dbg) Run:  kubectl --context auto-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (98.84s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p calico-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p calico-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m38.839200399s)
--- PASS: TestNetworkPlugins/group/calico/Start (98.84s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (111.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
E0321 22:57:26.148475   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.153711   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.163933   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.184187   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.224458   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.304749   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.465856   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:26.786839   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:27.427938   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:28.708838   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:31.269563   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:34.167778   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.173022   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.183256   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.203488   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.243729   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.324097   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.484524   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:34.805140   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:35.445513   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:36.390743   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:36.726226   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:39.287289   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:44.408044   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
E0321 22:57:46.631029   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
E0321 22:57:54.649130   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m51.440808436s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (111.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sk9rv" [8a8a2e87-bbff-47c1-9cdc-5a12432b415f] Running
E0321 22:58:07.096585   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
E0321 22:58:07.111387   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.018964969s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context kindnet-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-qfthg" [7c341bbf-fd9d-437d-8cfd-e9b9e9b57af6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-qfthg" [7c341bbf-fd9d-437d-8cfd-e9b9e9b57af6] Running
E0321 22:58:15.129529   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.089947721s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.43s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:174: (dbg) Run:  kubectl --context kindnet-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:193: (dbg) Run:  kubectl --context kindnet-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:248: (dbg) Run:  kubectl --context kindnet-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (71.96s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m11.960580436s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (71.96s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vb476" [b61b302d-d0d6-4506-8fce-14a19ec34094] Running
E0321 22:58:48.071631   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/no-preload-636285/client.crt: no such file or directory
net_test.go:119: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.021082363s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.38s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context calico-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-p9r4c" [d61d6e32-5506-432d-96f6-e48bdbd5a8d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0321 22:58:56.090054   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/default-k8s-diff-port-644329/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-p9r4c" [d61d6e32-5506-432d-96f6-e48bdbd5a8d0] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.021202554s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:174: (dbg) Run:  kubectl --context calico-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:193: (dbg) Run:  kubectl --context calico-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:248: (dbg) Run:  kubectl --context calico-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context custom-flannel-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-f2gfl" [1c792a88-343d-4445-935d-e687a9acb98a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-f2gfl" [1c792a88-343d-4445-935d-e687a9acb98a] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.008508194s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.44s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (91.88s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p flannel-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m31.883368792s)
--- PASS: TestNetworkPlugins/group/flannel/Start (91.88s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context custom-flannel-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context custom-flannel-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context custom-flannel-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (130.44s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:111: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:111: (dbg) Done: out/minikube-linux-amd64 start -p bridge-577737 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (2m10.439193621s)
--- PASS: TestNetworkPlugins/group/bridge/Start (130.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context enable-default-cni-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8c27d" [0db9163a-e3c5-4912-8320-dbe8d01cc24d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-8c27d" [0db9163a-e3c5-4912-8320-dbe8d01cc24d] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.009605129s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:174: (dbg) Run:  kubectl --context enable-default-cni-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:193: (dbg) Run:  kubectl --context enable-default-cni-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:248: (dbg) Run:  kubectl --context enable-default-cni-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pm72g" [248ff046-d97d-4a97-9a7b-eea366810c4a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.018021318s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-55c4cbbc7c-pm72g" [248ff046-d97d-4a97-9a7b-eea366810c4a] Running
E0321 23:00:50.937286   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/old-k8s-version-538335/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.009467319s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-615540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-615540 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20221004-44d545d1
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.41s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-615540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615540 -n embed-certs-615540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615540 -n embed-certs-615540: exit status 2 (238.629531ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-615540 -n embed-certs-615540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-615540 -n embed-certs-615540: exit status 2 (233.854038ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-615540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-615540 -n embed-certs-615540
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-615540 -n embed-certs-615540
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.41s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.02s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nnhz9" [226f0bb6-8ed6-41f6-9267-2fa93c0b9dce] Running
net_test.go:119: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.018090231s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.37s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context flannel-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-8gjmz" [88b6f07f-49f3-41cf-a846-6f78fd1a38a3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-694fc96674-8gjmz" [88b6f07f-49f3-41cf-a846-6f78fd1a38a3] Running
net_test.go:162: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.006862966s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:174: (dbg) Run:  kubectl --context flannel-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:193: (dbg) Run:  kubectl --context flannel-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:248: (dbg) Run:  kubectl --context flannel-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:132: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-577737 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:148: (dbg) Run:  kubectl --context bridge-577737 replace --force -f testdata/netcat-deployment.yaml
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-694fc96674-l2nrh" [5aa476f7-fea6-4320-a7fb-ec99d5618229] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0321 23:01:57.961116   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:57.966396   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:57.976660   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:57.996906   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:58.037167   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:58.117419   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:58.277799   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:58.598418   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:01:59.239400   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
E0321 23:02:00.520216   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
helpers_test.go:344: "netcat-694fc96674-l2nrh" [5aa476f7-fea6-4320-a7fb-ec99d5618229] Running
E0321 23:02:03.081361   64498 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/auto-577737/client.crt: no such file or directory
net_test.go:162: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.0080823s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:174: (dbg) Run:  kubectl --context bridge-577737 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:193: (dbg) Run:  kubectl --context bridge-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:248: (dbg) Run:  kubectl --context bridge-577737 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (34/297)

TestDownloadOnly/v1.16.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.2/cached-images (0s)
=== RUN   TestDownloadOnly/v1.26.2/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.26.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.2/binaries (0s)
=== RUN   TestDownloadOnly/v1.26.2/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.26.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.26.2/kubectl (0s)
=== RUN   TestDownloadOnly/v1.26.2/kubectl
aaa_download_only_test.go:156: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.26.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:214: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:463: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:458: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:545: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:88: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:292: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.46s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-296384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-296384
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (6.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:92: Skipping the test as the containerd container runtime requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-577737 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-577737" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.61.216:8443
  name: pause-911439
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.72.186:8443
  name: running-upgrade-675432
contexts:
- context:
    cluster: pause-911439
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: pause-911439
  name: pause-911439
- context:
    cluster: running-upgrade-675432
    user: running-upgrade-675432
  name: running-upgrade-675432
current-context: running-upgrade-675432
kind: Config
preferences: {}
users:
- name: pause-911439
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/pause-911439/client.crt
    client-key: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/pause-911439/client.key
- name: running-upgrade-675432
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/running-upgrade-675432/client.crt
    client-key: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/running-upgrade-675432/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-577737

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-577737"

                                                
                                                
----------------------- debugLogs end: kubenet-577737 [took: 5.472637285s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-577737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-577737
--- SKIP: TestNetworkPlugins/group/kubenet (6.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:101: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-577737 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-577737" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.61.216:8443
  name: pause-911439
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/16124-57437/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: cluster_info
    server: https://192.168.72.186:8443
  name: running-upgrade-675432
contexts:
- context:
    cluster: pause-911439
    extensions:
    - extension:
        last-update: Tue, 21 Mar 2023 22:44:27 UTC
        provider: minikube.sigs.k8s.io
        version: v1.29.0
      name: context_info
    namespace: default
    user: pause-911439
  name: pause-911439
- context:
    cluster: running-upgrade-675432
    user: running-upgrade-675432
  name: running-upgrade-675432
current-context: running-upgrade-675432
kind: Config
preferences: {}
users:
- name: pause-911439
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/pause-911439/client.crt
    client-key: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/pause-911439/client.key
- name: running-upgrade-675432
  user:
    client-certificate: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/running-upgrade-675432/client.crt
    client-key: /home/jenkins/minikube-integration/16124-57437/.minikube/profiles/running-upgrade-675432/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-577737

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-577737" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-577737"

                                                
                                                
----------------------- debugLogs end: cilium-577737 [took: 3.630923837s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-577737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-577737
--- SKIP: TestNetworkPlugins/group/cilium (4.05s)

                                                
                                    